The National Response Framework discusses several elements of effective response and response planning. The term response, as used in the National Response Framework, includes the immediate actions to save lives, protect property and the environment, and meet basic human needs. Response also includes the execution of emergency plans and actions to support short-term recovery. An effective, unified national response—including the response to any large-scale incident—requires layered, mutually supporting capabilities, both governmental and nongovernmental. Indispensable to effective response is an effective unified command, which requires a clear understanding of the roles and responsibilities of each participating organization. The National Response Framework employs the following criteria to measure key aspects of response planning:

Acceptability. A plan is acceptable if it can meet the requirements of anticipated scenarios, can be implemented within the costs and time frames that senior officials and the public can support, and is consistent with applicable laws.

Adequacy. A plan is adequate if it complies with applicable planning guidance, its planning assumptions are valid and relevant, and the concept of operations identifies and addresses critical tasks specific to the plan's objectives.

Completeness. A plan is complete if it incorporates the major actions, objectives, and tasks to be accomplished. The complete plan addresses the personnel and resources required and sound concepts for how they will be deployed, employed, sustained, and demobilized. It also addresses timelines and criteria for measuring success in achieving objectives and the desired end state. Including all those who could be affected in the planning process can help ensure that a plan is complete.

Consistency and standardization of products. Standardized planning processes and products foster consistency, interoperability, and collaboration; therefore, emergency operations plans for disaster response should be consistent with all other related planning documents.

Feasibility. A plan is considered feasible if the critical tasks can be accomplished with the resources available internally or through mutual aid; immediate needs for additional resources from other sources (in the case of a local plan, from state or federal partners) are identified in detail and coordinated in advance; and procedures are in place to integrate and employ resources effectively from all potential providers.

Flexibility. Flexibility and adaptability are promoted by decentralized decisionmaking and by accommodating all hazards, ranging from smaller-scale incidents to wider national contingencies.

Interoperability and collaboration. A plan is interoperable and collaborative if it identifies other stakeholders in the planning process with similar and complementary plans and objectives, and supports regular collaboration focused on integrating with those stakeholders' plans to optimize achievement of individual and collective goals and objectives in an incident.

Under the Post-Katrina Emergency Management Reform Act, FEMA has responsibility for leading the nation in developing a national preparedness system. FEMA has developed standards—the Comprehensive Preparedness Guide 101—that call for the validation, review, and testing of emergency operations plans (EOP).
According to the Comprehensive Preparedness Guide 101, plans should be reviewed for conformity to applicable regulatory requirements and the standards of federal or state agencies (as appropriate) and for their usefulness in practice. Exercises offer the best way, short of emergencies, to determine if an EOP is understood and "works." Further, conducting a "tabletop" exercise involving the key representatives of each tasked organization can serve as a practical and useful means to help validate the plan. FEMA's guidance also suggests that officials use functional and full-scale emergency management exercises to evaluate EOPs. Plan reviews by stakeholders also allow responsible agencies to suggest improvements in an EOP based on their accumulated experience.

We also identified the need for validated operational planning in the aftermath of Hurricane Katrina, noting that to be effective, national response policies must be supported by robust operational plans. In September 2006, we recommended, among other things, that DHS take the lead in monitoring federal agencies' efforts to meet their responsibilities under the National Response Plan (now the National Response Framework) and the National Preparedness Goal (now the National Preparedness Guidelines), including the development, testing, and exercising of agency operational plans to implement their responsibilities. DHS concurred with our recommendation. The Post-Katrina Emergency Management Reform Act transferred preparedness responsibilities to FEMA, and we recommended in April 2009 that FEMA improve its approach to developing policies and plans that define roles and responsibilities and planning processes by developing a program management plan, in coordination with DHS and other federal entities, to ensure the completion of the key national preparedness policies and plans called for in legislation, presidential directives, and existing policy and doctrine; to define roles and responsibilities and planning processes; and to fully integrate such policies and plans into other elements of the national preparedness system. FEMA concurred with our recommendation and is working to address it.

Other national standards reflect these practices as well. For example, according to Emergency Management Accreditation Program (EMAP) standards, the development, coordination, and implementation of operational plans and procedures are fundamental to effective disaster response and recovery. EOPs should identify and assign specific areas of responsibility for performing essential functions in response to an emergency or disaster. Areas of responsibility to be addressed in EOPs include such things as evacuation, mass care, sheltering, needs and damage assessment, mutual aid, and military support. EMAP standards call for a program of regularly scheduled drills, exercises, and appropriate follow-through activities—designed for assessment and evaluation of emergency plans and capabilities—as a critical component of a state, territorial, tribal, or local emergency management program. The documented exercise program should regularly test the skills, abilities, and experience of emergency personnel as well as the plans, policies, procedures, equipment, and facilities of the jurisdiction. The exercise program should be tailored to the range of hazards that confronts the jurisdiction.

We reported in April 2009 that FEMA lacked a comprehensive approach to managing the development of emergency preparedness policies and plans.
Specifically, we reported that FEMA had completed many policy and planning documents, but a number of others were not yet completed. For example, while DHS, FEMA, and other federal entities with a role in national preparedness had taken action to develop and complete some plans that detail and operationalize roles and responsibilities for federal and nonfederal entities, these entities had not completed 68 percent of the plans required by existing legislation, presidential directives, and policy documents as of April 2009. Specifically, of the 72 plans we identified, 20 had been completed (28 percent), 3 had been partially completed—that is, an interim or draft plan had been produced (4 percent)—and 49 had not been completed (68 percent). Among the plans that had been completed, FEMA published the Pre-Scripted Mission Assignment Catalog in 2008, which defines roles and responsibilities for 236 mission assignment activities to be performed by federal government entities, at the direction of FEMA, to aid state and local jurisdictions during a response to a major disaster or an emergency. Among the 49 plans that had not been completed were the National Response Framework incident annexes for terrorism and cyberincidents, as well as the National Response Framework's incident annex supplements for catastrophic disasters and mass evacuations. In addition, operational plans for responding to the consolidated national planning scenarios, as called for in Homeland Security Presidential Directive 8, Annex 1, remained outstanding.

In February 2010, DHS's Office of Inspector General reviewed the status of these planning efforts and reported that the full set of plans for any single scenario had not yet been completed, partly because of the time required to develop and implement the Integrated Planning System. The Integrated Planning System, required by Annex 1 to Homeland Security Presidential Directive 8 (December 2007), is intended to be a standard and comprehensive approach to national planning. The Directive calls for the Secretary of Homeland Security to lead the effort to develop, in coordination with the heads of federal agencies with a role in homeland security, the Integrated Planning System, followed by a series of related planning documents for each national planning scenario. The Homeland Security Council compressed the 15 National Planning Scenarios into 8 key scenario sets in October 2007 to integrate planning for like events and to conduct crosscutting capability development. The redacted version of the Inspector General's report noted that DHS had completed integrated operations planning for 1 of the 8 consolidated national planning scenarios—the terrorist use of explosives scenario. FEMA officials reported earlier this month that the agency's efforts to complete national preparedness planning will be significantly affected by the administration's pending revision to Homeland Security Presidential Directive-8. Once the new directive is issued, agency officials plan to conduct a comprehensive review and update of FEMA's approach to national preparedness planning. In addition to these planning efforts, FEMA has assessed the status of catastrophic planning in all 50 states and the 75 largest urban areas as part of its Nationwide Plan Review.
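The completion percentages above follow directly from the plan counts; a minimal sketch, using only the counts reported in this statement, confirms the arithmetic:

```python
# Status of the 72 national preparedness plans identified as of April 2009
# (counts as reported above).
plan_counts = {"completed": 20, "partially completed": 3, "not completed": 49}

total = sum(plan_counts.values())  # 72
for status, count in plan_counts.items():
    print(f"{status}: {count} of {total} ({count / total:.0%})")
# completed: 20 of 72 (28%)
# partially completed: 3 of 72 (4%)
# not completed: 49 of 72 (68%)
```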
The 2010 Nationwide Plan Review was based on the 2006 Nationwide Plan Review, which responded to direction from both Congress and the President to ascertain the status of the nation's emergency preparedness planning in the aftermath of Hurricane Katrina. The 2010 Nationwide Plan Review compares its results with those of the 2006 review of states' and urban areas' plans, functional appendices, and hazard-specific annexes on the basis of: consistency with Comprehensive Preparedness Guide 101; the date of the last plan update; the date of the last exercise; and a self-evaluation of the jurisdiction's confidence in each planning document's adequacy, feasibility, and completeness to manage a catastrophic event. FEMA reported in July 2010 that more than 75 percent of states and more than 80 percent of urban areas reported confidence that their overall basic emergency operations plans are well suited to meet the challenges presented during a large-scale or catastrophic event.

Oil spills are a special case with regard to response. For most major disasters, such as floods or earthquakes, a major disaster declaration activates federal response activities under the provisions of the Robert T. Stafford Disaster Relief and Emergency Assistance Act. However, for oil spills, federal agencies may have direct authority to respond under specific statutes. Response to an oil spill is generally carried out in accordance with the National Oil and Hazardous Substances Pollution Contingency Plan. The National Response Framework has 15 functional annexes, such as search and rescue, which provide the structure for coordinating federal interagency support for a federal response to an incident. Emergency Support Function #10, the Oil and Hazardous Materials Response Annex, governs oil spills. As described in Emergency Support Function #10, in general, the Environmental Protection Agency is the lead for incidents in the inland zone, and the U.S. Coast Guard, within DHS, is the lead for incidents in the coastal zone. The differences in responding to oil spills and the shared responsibility across multiple federal agencies underscore the importance of including clear roles, responsibilities, and legal authorities in developing operational response plans.

In conclusion, Mr. Chairman, emergency preparedness is a never-ending effort, as threats evolve and the capabilities needed to respond to those threats change as well. Realistic, validated, and tested operational response plans are key to an effective response to a major disaster of any type. Conducting exercises of these plans as realistically as possible is a key component of response preparedness because exercises help to validate and test plans by identifying what "works" and what does not. This concludes my statement. I will be pleased to respond to any questions you or other members of the committee may have.

For further information on this statement, please contact William O. Jenkins, Jr. at (202) 512-8757 or JenkinsWO@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Stephen Caldwell, Director; Chris Keisling, Assistant Director; John Vocino, Analyst-in-Charge; and Linda Miller, Communications Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Among the lessons learned from the aftermath of Hurricane Katrina was that effective disaster response requires planning followed by the execution of training and exercises to validate those plans. The Federal Emergency Management Agency (FEMA) is responsible for disaster response planning. This testimony focuses on (1) criteria for effective disaster response planning established in FEMA's National Response Framework, (2) additional guidance for disaster planning, (3) the status of disaster planning efforts, and (4) special circumstances in planning for oil spills. This testimony is based on prior GAO work on emergency planning and response, including GAO's April 2009 report on FEMA's efforts to lead the development of a national preparedness system. GAO reviewed the policies and plans that form the basis of the preparedness system. GAO did not assess any criteria used or the operational planning for the Deepwater Horizon response. FEMA's National Response Framework identifies criteria for effective response and response planning, including (1) acceptability (meets the requirements of anticipated scenarios and is consistent with applicable laws); (2) adequacy (complies with applicable planning guidance); (3) completeness (incorporates major actions, objectives, and tasks); (4) consistency and standardization of products (consistent with other related documents); (5) feasibility (tasks accomplished with resources available); (6) flexibility (accommodating all hazards and contingencies); and (7) interoperability and collaboration (identifies stakeholders and integrates plans). In addition to the National Response Framework, FEMA has developed standards that call for validation, review, and testing of emergency operations plans. According to FEMA, exercises offer the best way, short of emergencies, to determine if such plans are understood and work. FEMA's guidance also suggests that officials use functional and full-scale emergency management exercises to evaluate plans. Other national standards reflect these practices as well. For example, the Emergency Management Accreditation Program standards call for a program of regularly scheduled drills, exercises, and appropriate follow-through activities, as a critical component of a state, territorial, tribal, or local emergency management program. GAO reported in April 2009 that FEMA lacked a comprehensive approach to managing the development of emergency preparedness policies and plans. Specifically, GAO reported that FEMA had completed many policy and planning documents, but a number of others were not yet completed. In February 2010, the Department of Homeland Security's (DHS) Office of Inspector General reviewed the status of these planning efforts and reported that the full set of plans for any single scenario had not yet been completed partly because of the time required to develop and implement the Integrated Planning System. The Integrated Planning System, required by Annex 1 to Homeland Security Presidential Directive 8 (December 2007), is intended to be a standard and comprehensive approach to national planning. Oil spills are a special case with regard to response. The National Response Framework has 15 functional annexes that provide the structure for coordinating federal interagency support for a federal response to an incident. Emergency Support Function #10--Oil and Hazardous Materials Response Annex--governs oil spills.
Under this function, the Environmental Protection Agency is the lead for incidents in the inland zone, and the U.S. Coast Guard, within DHS, is the lead for incidents in the coastal zone. This difference underscores the importance of including clear roles, responsibilities, and legal authorities in developing operational response plans. GAO is not making any new recommendations in this testimony but has made recommendations to FEMA in previous reports to strengthen disaster response planning, including the development of a management plan to ensure the completion of key national policies and planning documents. FEMA concurred and is currently working to address this recommendation.
The percentage of older workers—those age 55 and older—is growing faster than that of any other age group, and they are expected to live and work longer than past generations. Labor force participation for this cohort has grown from about 31 percent in 1998 to 38 percent in 2007. In contrast, labor force participation of workers under age 55 has declined slightly. (See fig. 1.) Many factors influence workers' retirement and employment decisions, including retirement eligibility rules and benefits, an individual's health status and occupation, the availability of health insurance, personal preference, and the employment status of a spouse. The availability of suitable employment, including part-time work or flexible work arrangements, may also affect the retirement and employment choices of older workers.

As the government's human capital leader, OPM is responsible for helping agencies develop their human capital management systems and holding them accountable for effective human capital management practices. One such practice is strategic workforce planning, which addresses two critical needs: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals, and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve these goals. In developing these strategies, we have reported that leading organizations go beyond a succession-planning approach that focuses on simply replacing individuals. Rather, they engage in broad, integrated succession planning and management efforts directed toward strengthening both current and future organizational capacity. In implementing its personnel policies, the federal government is required to uphold federal merit system principles in recruiting, engaging, and retaining employees. Among other provisions, the merit system principles require agencies to recruit and select candidates based on fair and open competition, as well as to treat employees fairly and equitably. Federal agencies can recruit skilled or experienced workers, many of whom tend to be older, to fill positions requiring demonstrated expertise. OPM is also responsible for administering retirement, health benefits, and other insurance services for government employees, annuitants, and beneficiaries. It develops implementing regulations when Congress makes new options available to federal employees and often takes the lead in advocating for new legislative options.

We and others have highlighted the need to hire and retain older workers to address the challenges associated with an aging workforce. In so doing, we have called upon the federal government to assume a leadership role in developing strategies to recruit and retain older workers. At our recommendation, the Department of Labor (Labor) convened a task force composed of senior representatives from nine federal agencies, which issued its first report in February of this year. The report provides information on strategies to support the employment of older workers, strategies businesses can use to leverage the skills of an aging labor pool, individual opportunities for employment of older workers, and legal and regulatory issues associated with work and retirement. While the task force's focus was the private sector, some of the strategies it identified, such as providing flexible work arrangements and customized employment options that include alternative work schedules and part-time work, are relevant for federal agencies as well.
As we and other organizations have previously reported, the federal workforce is aging, with an increasing number of employees eligible to retire. Many of the workers who are eligible to retire in the 24 agencies covered by the Chief Financial Officers (CFO) Act are in executive and supervisory positions, as well as in mission-critical occupations. These trends point to the importance of strategic workforce planning to help agencies forecast, with greater precision, who might retire, when they might retire, and the impact of their retirement on an agency's mission, and, using this information, develop appropriate strategies to address workforce gaps.

The percentage of older workers and workers nearing retirement eligibility varies across the federal government, with some agencies employing more such workers than others. For example, as shown in figure 2, the percentage of career federal employees in the 24 CFO agencies age 55 or older ranged from about 9 percent at the Department of Justice to about 38 percent at the Small Business Administration and the Department of Housing and Urban Development in fiscal year 2007. Importantly, about one-third of federal workers will be eligible to retire over the next 5 years. Thirty-three percent of federal career employees on board as of the end of fiscal year 2007 will be eligible to retire during fiscal years 2008 through 2012. In comparison, about 20 percent of federal career employees on board as of the end of fiscal year 1997 were projected to be retirement-eligible during fiscal years 1998 through 2002.

Many workers projected to become retirement eligible by 2012 are concentrated in certain agencies. As figure 3 shows, the agency rates range from a low of 20 percent at the Department of Homeland Security (DHS) to a high of 46 percent at four agencies—the Agency for International Development (AID), the Department of Housing and Urban Development, the Small Business Administration (SBA), and the Department of Transportation (Transportation). Estimated retirement eligibility rates will also likely vary at the component level—that is, by bureau or unit. Thus, even agencies that have relatively low overall percentages of retirement-eligible employees may have components with higher percentages of retirement-eligible staff, which, in turn, could affect the accomplishment of mission tasks and strategic goals for agency components and for the agency as a whole.

Certain occupations have particularly high rates of workers eligible to retire. As shown in figure 4, 50 percent or more of the workers in 24 of the 315 occupations with 500 or more staff we reviewed are eligible to retire by 2012. Several of these occupations, such as air traffic controllers, customs and border protection interdiction agents, and administrative law judges, are considered mission critical. Federal law requires mandatory retirement at specified ages for some occupations, such as air traffic controllers, who must retire at age 56. Retirement eligibility will be especially pronounced among the agencies' executives and supervisors (see fig. 5). Of the approximately 7,200 career executives as of the end of fiscal year 2007, 64 percent are projected to be retirement eligible by 2012, up from the 41 percent who were eligible as of 2008. For supervisors who are not career executives, 45 percent will be eligible to retire by 2012.
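The occupation-level screen described above reduces to a simple two-part test: an occupation qualified if it had 500 or more staff and a retirement eligibility rate of roughly 50 percent or more (the scope discussion later in this statement defines the threshold as exceeding the 33 percent governmentwide rate by 50 percent or more). Below is a minimal sketch of that screen; the occupation records are invented for illustration and are not drawn from the underlying personnel data:

```python
# Hypothetical occupation records standing in for the underlying personnel
# data: (occupation, staff on board at end of FY 2007, share eligible to
# retire by 2012). The names and numbers are invented for illustration.
occupations = [
    ("air traffic controller", 15_000, 0.55),
    ("program analyst", 30_000, 0.30),
    ("administrative law judge", 1_400, 0.62),
    ("small specialty occupation", 300, 0.70),  # dropped: fewer than 500 staff
]

GOVERNMENTWIDE_RATE = 0.33
AT_RISK_THRESHOLD = GOVERNMENTWIDE_RATE * 1.5  # about 50 percent eligibility

at_risk = [
    name
    for name, staff, eligible_share in occupations
    if staff >= 500 and eligible_share >= AT_RISK_THRESHOLD
]
print(at_risk)  # ['air traffic controller', 'administrative law judge']
```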
Although most federal employees do not retire immediately upon becoming eligible, the increasing number of employees becoming retirement eligible in the near future points to the need for agencies to examine how these trends will affect them. In so doing, agencies should take into account the location, occupation, and grade level of potential retirees and develop appropriate succession planning strategies, which may include retaining older, experienced workers, to address the likely impact. At the same time, it will be important for agencies to consider their organization's strategic goals and the critical skills and competencies needed to meet future program outcomes, and to develop strategies to address any gaps in the number, skills, and competencies needed to achieve those outcomes. Federal agencies have a range of flexibilities at their disposal to help them engage and retain mission-critical staff, including older, experienced workers. The Social Security Administration (SSA), an agency that stands to lose a relatively large proportion of its experienced workforce in the upcoming retirement wave, is using many of the available governmentwide flexibilities to address workforce shortages. Some other federal agencies have developed alternative approaches that facilitate hiring and retaining older workers to meet their workforce needs. Despite the availability of recruitment, retention, and other flexibilities, agencies still face human capital challenges. In our past work, we found that agencies are not always able to make full use of available flexibilities, in part because they did not fully understand them. As the government's human capital leader, OPM sets human capital policies for federal agencies. In many cases, OPM serves as the gatekeeper by approving or disapproving an agency's request to use some hiring authorities or other flexibilities. Other authorities do not require OPM's approval. While only one of the available flexibilities—dual compensation waivers—is largely focused on older workers, many others, such as flexible and part-time work, are particularly appealing to older workers. OPM does not specifically target older workers in its policies. Instead, efforts are focused on workers who possess the right skills and experience to meet agencies' workforce needs, without regard to age.

Governmentwide Hiring Authorities. Under existing rules and regulations, agencies have the authority to use a number of different approaches to hire workers, including older workers, without obtaining OPM approval. Among other approaches, they include using temporary appointments for short-term needs; temporary assignees from state and local governments, colleges and universities, Indian tribal governments, and qualified not-for-profit agencies under the Intergovernmental Personnel Act (IPA); commercial recruiting firms and nonprofit employment firms; consultants for temporary or intermittent employment; and contractual arrangements, such as commercial temporary help for brief periods of time and contracts for services, as long as the contracts follow federal procurement regulations. Agencies can also use additional hiring authorities by obtaining OPM's approval or by notifying OPM on a case-by-case basis of their intent. These include the following:

Dual compensation waivers to rehire federal retirees. OPM may grant waivers allowing agencies to fill positions with rehired federal annuitants without offsetting the salaries by the amount of the annuities. Agencies can request waivers on a case-by-case basis for positions that are extremely difficult to fill or for emergencies or other unusual circumstances. Agencies can also request from OPM a delegation of authority to grant waivers for emergencies or other unusual circumstances. These waivers by their nature are largely for older workers.

Special authority to hire for positions in contracting. Agencies can rehire federal annuitants to fill positions in contracting without being required to offset the salaries. Agencies are required only to notify and submit their hiring plans to OPM.

On-the-spot hiring without competition. This may be used in circumstances where (1) public notice has been given and (2) OPM has determined there is a severe shortage of candidates or a critical hiring need, such as in certain medical or information security occupations, or occupations requiring fluency in Middle Eastern languages.

Veterans' recruitment appointment. Agencies may hire certain veterans without competition to fill positions up to the GS-11 level, or disabled veterans at any level.

Enhanced annual leave computation. Agencies may credit relevant private sector experience when computing annual leave amounts.

Flexible Schedules and Workplaces. Flexible schedules and workplaces are often extremely important to older workers. For example, some research indicates older workers want to set their own hours and to be able to take time off to care for relatives when needed. In addition, older workers nearing retirement may prefer a part-time schedule, for example, as a means to retire gradually. Federal agencies have work schedule flexibilities that they can provide to their workers without prior approval from OPM. These include the following:

Part-time schedules. These allow employees to work on a part-time basis; however, part-time work late in a career may result in a lower retirement annuity and may affect other benefits.

Flexible work schedules. These allow full-time or part-time employees to determine their hours of work within established parameters or to modify the typical work schedule of 8 hours a day, 5 days per week by, for example, permitting schedules of 10 hours per day, 4 days per week.

Alternative work sites. These allow employees to work from home or outside the traditional office.

Compensation Flexibilities. Agencies also have authority to provide additional compensation to employees. These include giving recruitment, relocation, and retention incentives to employees; raising the pay rate of critical positions in an agency (this requires OPM approval after consultation with the Office of Management and Budget, or OMB); providing certain eligible physicians allowances of up to $30,000 per year (this flexibility requires OMB approval); and using special rates, premium pay, and other compensation flexibilities to recruit and retain specified health care employees under a delegation agreement with OPM.

One of the agencies we reviewed, SSA, is particularly at risk of losing a substantial portion of its workforce at a time when it will experience unprecedented growth in demand for its services. SSA faces looming shortages, particularly in some mission-critical positions, despite enhancing its human capital planning efforts. The agency is experiencing service delays even at current staffing levels. For example, SSA had over 750,000 disability claims awaiting a decision at the hearing level at the end of January 2008, leading to an average waiting time in fiscal year 2008 of almost 500 days.
These backlogs could increase in the future, given SSA's demographic profile. Indeed, according to SSA's data, about 40 percent of its current workforce will be eligible to retire by 2012, and by 2017 that proportion will grow to more than half. The prospects for hiring new employees to replace retirees over this time frame remain unclear. According to SSA officials, while the fiscal year 2008 budget provided sufficient funding to allow SSA to replace all of the employees who leave—and to add employees in some areas—budgetary constraints have prevented the agency from hiring enough new workers in the past, and future hiring will depend upon the resources available. Even when SSA can hire, the agency faces stiff competition from the private sector, especially for jobs in the accounting, legal, and information technology fields.

The potential impact of retirements varies across the agency and will affect occupations SSA deems critical. By 2012, the proportion of those eligible to retire will generally range from 25 percent to 59 percent across such occupations. However, the proportion of administrative law judges eligible to retire is far higher—about 86 percent will be eligible by 2012 and nearly all by 2017. Moreover, SSA stands to lose a substantial portion of its leadership staff. Of its 138 Senior Executive Service (SES) members, 62 percent are now eligible to retire, and nearly 80 percent will be eligible by 2012. Of the over 6,000 supervisors, nearly 39 percent are currently eligible to retire, and 57 percent could retire by 2012.

SSA has increasingly used information technology solutions in an effort to improve its human capital management. For example, to better understand where to place its human capital emphasis, SSA has developed an approach that uses historical data to project future retirements. The model projects who is likely to retire, and SSA uses these projections to estimate gaps in mission-critical positions and to identify the regional and headquarters components most affected (a simplified sketch of this type of projection follows at the end of this section). With these estimates, the agency develops action plans focused on recruiting and hiring, retention, and staff development. SSA has also developed user-friendly automated tools to help job applicants complete the application process. This system keeps both applicants and managers advised of where an application is in the process.

SSA has used a variety of strategies to hire and retain older workers, some of which include the human capital flexibilities available to all federal agencies. For example, SSA offers recruitment, relocation, and retention bonuses to individuals with needed skills and considers an employee's private sector experience when computing annual leave status. The agency also offers workplace flexibilities, such as alternative work sites and flexible work schedules; seminars on dependent care for elderly family members and children; and learning opportunities, such as midcareer and pre-retirement seminars. In addition, SSA provides the options of a phased retirement program—allowing employees to work on a part-time basis during the years immediately prior to retirement—and a trial retirement program—allowing workers to return to work within a year of retiring if they repay the annuity they have received. However, SSA officials told us that the programs have rarely been used because of the financial penalty workers would face.
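This statement describes SSA's retirement projection approach only at a high level. The following is a minimal sketch of how a historical-rate projection of this kind might work; the age bands, retirement rates, and staff counts are hypothetical and are not SSA data:

```python
# Hypothetical sketch of a historical-rate retirement projection. The
# age-band retirement rates and staff counts below are invented for
# illustration; they are not SSA figures.

# Observed historical probability that an employee in each age band
# retires within five years.
historical_5yr_retirement_rate = {
    "under 50": 0.02,
    "50-54": 0.15,
    "55-61": 0.45,
    "62+": 0.70,
}

# Current on-board staff in a mission-critical occupation, by age band.
staff_by_age_band = {"under 50": 400, "50-54": 250, "55-61": 300, "62+": 50}

expected_retirements = sum(
    count * historical_5yr_retirement_rate[band]
    for band, count in staff_by_age_band.items()
)
on_board = sum(staff_by_age_band.values())
print(f"Expected 5-year retirements: {expected_retirements:.0f} of {on_board}")
# Expected 5-year retirements: 216 of 1000
```

Comparing the expected losses against projected workloads is what turns a projection like this into the staffing-gap estimates and action plans the statement describes.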
Under a delegation of authority from OPM, SSA has used dual compensation waivers to hire experienced retirees to fill staffing shortages in critical areas when these shortages constituted an emergency. From November 2000 through December 2006, SSA used this authority to rehire nearly 1,300 federal annuitants who were knowledgeable and proficient at processing complex, difficult workloads and who needed little or no training, to fill mission-critical workforce gaps. This waiver authority expired in 2006 and was not renewed. However, SSA has been granted a waiver to reemploy administrative law judges to help reduce the disability hearings backlog. In addition, SSA was granted a waiver to rehire federal annuitants to provide relief in areas affected by Hurricanes Katrina and Rita. SSA officials reported that when they have been allowed to use dual compensation waivers, their experiences have been extremely positive.

Beyond using existing flexibilities, SSA has developed recruitment efforts that reach out to a broader pool of candidates, some of whom are older workers. For example, SSA began recruiting retired military personnel and disabled veterans in 2002 because of its commitment to helping veterans. SSA is also beginning to reach out to older workers in order to achieve its diversity goal of attracting a multigenerational workforce. These steps have included developing recruiting material featuring images of older and younger workers. In addition to SSA, we interviewed officials in several other agencies and learned that some have developed their own approaches to hiring and engaging older workers.

Identifying and recruiting retirees with critical skills by using technology. The Department of State has developed two databases to match interested Foreign Service and Civil Service retirees with short-term or intermittent job assignments that require their skill sets or experience. One database—the Retirement Network, or RNet—contains a variety of information, including individuals' job experiences, foreign language abilities, special skills, preferred work schedules, and favored job locations. To identify individuals with specific skill sets, officials match information from RNet with another database that organizes and reports all available and upcoming short-term job assignments. For instance, in 2004, the agency identified current and retired employees familiar with Sumatra's culture and language and sent many of them to Indonesia to help with the tsunami relief efforts. According to officials, this technology has allowed them to identify individuals with specialized skills and specific job experience within hours. Before these systems were in place, the search for specific individuals would have taken days or weeks, and even then the list of individuals would have been incomplete. Because different personnel rules apply to Foreign Service and Civil Service positions, the agency typically brings Civil Service retirees on as contractors—nonfederal employees without any reduction to earnings or annuities—and may hire Foreign Service retirees as federal employees who may earn their full salaries and annuities.

Hiring older workers through nonfederal approaches. The Environmental Protection Agency (EPA) has designed a program that places older workers (55 years and over) in administrative and technical support positions within EPA and other federal and state environmental agencies nationwide.
Instead of hiring older workers directly into the government as federal employees, EPA has cooperative agreements with nonprofit organizations to recruit, hire, and pay older workers. Under these agreements, workers are considered program enrollees, not federal employees. EPA's Senior Environmental Employment (SEE) program started as a pilot project in the late 1970s and was authorized by the Environmental Programs Assistance Act in 1984. According to EPA, many SEE enrollees come from long careers in business and government service, offer valuable knowledge, and often serve as mentors to younger coworkers. Depending on their skills and experience, program enrollees' wages vary, starting at $6.92 per hour and peaking at $17.29 per hour. Using the SEE program as a model, the Department of Agriculture's Natural Resources Conservation Service (NRCS) recently developed a pilot project called the Agriculture Conservation Enrollees/Seniors (ACES) program. Officials from both EPA and NRCS told us that their programs are crucial in helping the agencies meet workload demands and in providing older workers with valuable job opportunities.

Partnering with private firms to hire retired workers. In partnership with IBM and the Partnership for Public Service, the Department of the Treasury is participating in a pilot project that aims to match the talent and interest of IBM retirees and employees nearing retirement with Treasury's mission-critical staffing needs. Working together, the three organizations are designing a program that intends to send specific Treasury job opportunities to IBM employees with matching skill sets and experience; help create streamlined hiring processes; provide career transition support, such as employee benefits counseling and networking events; and encourage flexible work arrangements. Officials are developing the pilot project within existing governmentwide flexibilities that do not require special authority from OPM. As one official suggested, designing such a project may reveal the extent to which existing federal flexibilities allow new ways of hiring older workers.

Engaging older workers through mentoring. Three of the agencies we spoke with are in the beginning stages of formalizing mentoring programs, recognizing that mentoring relationships help pass down knowledge to less experienced workers as well as engage older workers. According to one official at the Nuclear Regulatory Commission (NRC), fostering mentoring relationships is essential for his agency. Not only do these relationships help to transfer knowledge to less experienced workers, they also help senior-level staff build strong professional relationships with junior employees.

Despite the availability of recruitment, retention, and other flexibilities, agencies still face human capital challenges. In our past work, we found that agencies were not always able to make full use of available flexibilities, in part because they did not fully understand the range of flexibilities at their disposal and did not always educate managers and employees on their availability and use. In addition, we found that inadequate funding and weak strategic human capital planning contributed to the limited use of these flexibilities. Moreover, OPM did not take full advantage of its ability to share information about when, where, and how the broad range of flexibilities is being used and should be used to help agencies meet their human capital management needs.
Several federal hiring procedures can also pose challenges, making it difficult for the federal government to compete with private sector employers and for older workers, as well as all other applicants, to get a federal job. Recently, the Partnership for Public Service reported that older workers face a number of challenges to getting a federal job, although many of the issues it raised affect applicants of all age groups. For example, for many of the jobs open to the public, too little time was allowed between the time the job was announced and the deadline for submitting a complete application. For this and other reasons, a majority of the older workers surveyed thought that applying for a federal job is difficult. To address the cumbersome and complicated federal hiring process, the report recommended that federal agencies publicize federal jobs beyond the internet, make federal job announcements and the application process more user-friendly, and keep applicants informed of their application status. Furthermore, the dual compensation rule, which requires annuitants' salaries to be offset by the amount of the annuities they receive, can create a financial disincentive for retirees wishing to return to federal service (the sketch at the end of this section illustrates the offset). Unless federal agencies receive authority to grant waivers of this rule, retired federal workers—unlike private sector or military retirees—would be working for reduced rates of pay. Other rules also make it difficult to use flexibilities. For example, federal employees close to retirement may face reductions in their monthly retirement benefit if they choose to work part-time—or use a phased retirement approach—near the end of their careers.

Overall, the federal government is making progress toward becoming a model employer of older, experienced workers. Research has identified characteristics of work environments that older workers value and that can serve as a model for federal agencies as they seek to hire and retain older workers. While opportunities for improvement remain, OPM and Congress are taking action to minimize the challenges faced in hiring and retaining older, experienced workers. As we discussed earlier, individual agencies have strategies that they can use to address some challenges to employing older workers, but actions from OPM, as the government's human capital leader, and from Congress play pivotal roles in changing the landscape of rules regarding federal employment.

OPM has taken steps to provide agencies with guidance on how to use available flexibilities and authorities. For example, in the fall of 2005, OPM put a hiring flexibilities decision support tool online to assist agencies in assessing which flexibilities would best meet their needs. This Hiring Flexibilities Resource Center provides in-depth information on a variety of flexibilities, including Direct Hire and Excepted Service. In January 2008, OPM updated its handbook on federal flexibilities and authorities, also online, which provides more information on both recruiting and retention strategies agencies can use to promote the positive aspects of the federal government. Responding to past findings about the length of time it takes to hire new employees, OPM has established a strategic goal to improve recruitment and retention, including faster hiring, hiring more veterans, rehiring annuitants, and faster application processing.
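The financial disincentive created by the dual compensation rule discussed above is straightforward to quantify. The sketch below uses hypothetical salary and annuity figures (not drawn from this statement) to show how the offset reduces a rehired annuitant's pay and how a waiver removes the reduction:

```python
def rehired_annuitant_pay(salary: float, annuity: float, waiver: bool = False) -> float:
    """Annual salary actually paid to a rehired federal annuitant.

    Without a waiver, the salary is offset (reduced) by the amount of the
    annuity; with a waiver, the annuitant draws the full salary in addition
    to the annuity. All figures are hypothetical, for illustration only.
    """
    return salary if waiver else salary - annuity


salary, annuity = 90_000.0, 40_000.0  # hypothetical figures
print(rehired_annuitant_pay(salary, annuity))               # 50000.0 (offset pay)
print(rehired_annuitant_pay(salary, annuity, waiver=True))  # 90000.0 (full salary)
```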
OPM has also encouraged federal agencies to consider a broad spectrum of employees as they seek to meet workforce needs. According to OPM, federal human capital managers are facing increasing competition in attracting and retaining talent. To meet this challenge, OPM has developed a new approach, called the Career Patterns initiative, for bringing the next generation of employees into federal positions. In its Career Patterns guidance to agencies, OPM identifies, among other things, scenarios that describe the particular characteristics of 10 types of individuals, including students, new professionals, experienced professionals, and retirees, who could help broaden the pool of potential employees for federal jobs.

OPM has also recently proposed legislation to Congress that would enhance agencies' ability to hire and retain older workers. One proposal would give agencies the authority to grant dual compensation waivers without having to obtain OPM approval; the proposal calls for yearly and lifetime limits on the number of hours an individual can be reemployed. A second proposal would increase the ability of employees to work on a part-time basis during their pre-retirement years by changing the way Civil Service Retirement System (CSRS) annuities based on part-time service are computed.

Congress has taken action to address the retirement risks in selected occupations. For example, in 2006, Congress enacted legislation allowing federal agencies to hire federal retirees to fill acquisition-related positions without requiring an annuity offset. The authority to use this provision expires on December 31, 2011. Agencies are required to consult with OPM and OMB in developing their plans for reemploying annuitants in acquisition-related positions and to submit their plans to OPM. OPM officials indicated that they have reviewed and approved nine agencies' plans for consistency with the statutory criteria for reemploying annuitants in those positions. In addition, bills incorporating OPM's proposals related to dual compensation waivers and part-time employment are currently pending in Congress.

Although the availability of flexibilities generally varies by agency, agencies already have the authority to offer a number of the flexibilities and practices older workers find appealing in an employer. For example, AARP—one of the largest interest groups for individuals age 50 and older—developed criteria to evaluate an organization's performance in recruiting, engaging, and retaining this cohort and has, since 2001, used these criteria to identify the best employers for workers over 50. Features of these criteria, as described by AARP, overlap with many of the flexibilities that federal employment offers. Their availability and usage in the federal government are shown in table 1. One area where the government is particularly strong is the availability of health benefits. Federal agencies offer health benefits and pay a sizable portion of the cost. Health insurance may be an especially attractive recruitment and retention tool for older workers, since employees generally need only 5 years of federal service with health insurance coverage while employed to carry that benefit into retirement. The federal government also offers retirement benefits and life insurance and provides the opportunity for employees to purchase long-term care insurance. In other areas, agencies generally have the authority to offer a particular practice, but actual usage varies by agency depending on the agency's needs, the nature of the work, and other factors.
For example, with respect to alternative work arrangements, the federal government offers a number of programs that help employees balance their personal and professional responsibilities, including telecommuting, part-time employment, and flexible schedules.

A looming retirement wave could result in a possible "brain drain" for federal agencies unless they employ effective succession planning strategies—strategies that include hiring younger workers as well as retaining older workers who possess needed talents and skills and who can share those talents and skills with the next generation. Importantly, the government already has a number of tools in its human capital tool kit that provide agencies with the flexibility they need to recruit and retain older workers. Moreover, federal employment offers a range of benefits and employment flexibilities, such as the availability of part-time employment, that can make these jobs particularly attractive to older workers as well as to other demographic groups. Still, while the federal government is making progress toward becoming a model employer of older workers, opportunities for improvement remain.

First, our demographic analysis shows that while the federal workforce is aging, and an increasing number of employees will become eligible for retirement in the coming years, these trends will have varying degrees of impact within particular agencies. As a result, it will be important for agencies to fully understand and forecast the implications of these trends so they can implement timely and appropriate succession strategies. Second, as certain federal agencies have developed alternative approaches to hire and engage older workers, it will be important for agencies to share lessons learned from these practices governmentwide. And third, despite the availability of employment flexibilities and practices that allow employees to balance their professional and personal responsibilities, individual agencies do not always take advantage of them. Consequently, it is incumbent upon OPM to continue to work with federal agencies to help ensure they have (1) the flexibilities they need to attract and retain a diverse workforce that includes older workers; (2) an awareness of these flexibilities, as well as the guidance and technical assistance needed to implement them; and (3) mechanisms to share best practices and lessons learned in attracting and retaining older workers. Becoming a model employer is a shared responsibility, and it will be important for agencies to, among other actions, take advantage of available flexibilities and guidance on their use and to communicate this information throughout the agency. Collectively, these measures will help make federal agencies more competitive in the labor market for all demographic groups.

Mr. Chairman, Ranking Member, and Members of the Committee, this concludes our prepared statement. We would be pleased to respond to any questions that you may have. For questions regarding this testimony, please call Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security, at 202-512-7215, or Robert N. Goldenkoff, Director, Strategic Issues, at 202-512-6806. Other individuals making key contributions to this statement include Dianne J. M. Blank and Belva M. Martin, Assistant Directors; Nicholas Alexander; Jessica Botsford; Clifton G. Douglas, Jr.; Karin Fangman; Cheri L. Harrington; Isabella Johnson; Mary Y. Martin; Rachael C. Valliere; Kathleen D. White; and Greg Wilmoth.
Our objectives were to describe (1) the age and retirement eligibility trends of the current federal workforce; (2) the strategies federal agencies are using to recruit, hire, and retain older workers; and (3) our observations on how these strategies position federal agencies to engage and retain older workers. To describe demographic trends relating to the retirement eligibility and aging of the federal workforce, we analyzed information on the 24 CFO agencies from OPM’s human resource reporting system, the Central Personnel Data File (CPDF). We analyzed data on the age, retirement eligibility, occupations, projected retirement rates, and other characteristics of the career federal workforce. We used the following variables: agency, occupation, date of birth, service computation date, pay plan/grade, and supervisory status. All these variables were 97 percent or more reliable. Using the CPDF information, we analyzed the age distribution of career federal employees at CFO agencies by age groupings (under 40, 40-54, and 55 and over). We also analyzed the percentage of career federal employees on board at the end of fiscal year 2007, who would be eligible to retire from fiscal years 2008 to 2012, and the percentage of workers eligible to retire in occupations where the retirement rates exceeded the governmentwide average. As a proxy for those occupations that may be at risk due to high retirement eligibility rates, we selected occupations with 500 or more employees as of the end of fiscal year 2007 that exceeded the governmentwide rate of 33 percent by 50 percent or more. For this report, we defined older workers as those 55 and older. To address this objective, we reviewed strategies available to federal agencies to hire and retain older workers through governmentwide hiring authorities and other flexible arrangements, we interviewed officials at OPM and other selected federal agencies, including Labor, and we reviewed previous GAO work relating to older workers and federal human capital strategies. We also reviewed relevant laws, literature on hiring and retaining older workers, and documents on federal human capital flexibilities. As a case study, we selected the Social Security Administration (SSA), to describe the strategies that one federal agency has taken to hire and retain older workers. We selected SSA because it is at high risk of losing a large proportion of mission-critical employees and because it risks losing these employees at the same time the need for its services will peak. We interviewed human capital officers at selected agencies to collect such information as: the extent to which agency officials think they will be affected by retirements over the next decade; the information agencies have used and presently use to guide the identification of the probable effects of retirement and how agencies determine appropriate courses of action; strategies that have been taken or will be taken regarding recruiting, hiring, and retaining older workers; and characteristics and scope of these strategies, including human capital authorities or special legislation regarding flexibilities that are available to them. To identify and describe alternative strategies, we performed a literature search and conducted interviews with government and private sector experts. The strategies that we describe were developed by the Departments of Agriculture, HUD, and State, the Internal Revenue Service (IRS), Environmental Protection Agency (EPA), and Nuclear Regulatory Commission (NRC). 
We selected these strategies because they were developed internally to meet specific agency needs but could be used elsewhere in the government. To address this objective, we reviewed previous GAO work and conducted interviews with federal officials at OPM and selected agencies, as well as with experts at AARP and other organizations that serve older individuals, to determine how the federal government is progressing toward becoming a model employer. We chose the AARP criteria for selecting the best employers for workers over age 50 to compare with the characteristics of federal employment because of their exclusive focus on older workers. We conducted our work from November 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal workforce, like the nation's workforce as a whole, is aging. As experienced employees retire, they leave behind critical gaps in leadership and institutional knowledge, increasing the challenges government agencies face in maintaining a skilled workforce. We and others have emphasized the need to hire and retain older workers as one part of a comprehensive strategy to address expected labor shortages. The Office of Personnel Management (OPM), as the government's central personnel management agency, is responsible for helping agencies manage their human capital. The Chairman of the Senate Special Committee on Aging asked GAO to discuss (1) the age and retirement eligibility trends of the current federal workforce, (2) the strategies federal agencies are using to hire and retain older workers, and (3) our observations on how these strategies position federal agencies to engage and retain older workers. To address these objectives, we analyzed demographic data from OPM's Central Personnel Data File and interviewed officials at OPM and selected federal agencies. OPM is taking action to address past recommendations related to better assisting agencies in using personnel flexibilities. GAO is making no new recommendations at this time. Governmentwide, about one-third of federal career employees on board at the end of fiscal year 2007 will be eligible to retire by 2012. Many of these workers are concentrated in certain agencies. For example, nearly half of employees on board at the end of fiscal year 2007 at the Departments of Housing and Urban Development and Transportation, and at the Agency for International Development and the Small Business Administration, will be eligible to retire by 2012. The proportion of workers eligible to retire is also expected to be high in certain occupations, including those considered mission critical, such as air traffic controllers and customs and border protection agents, where more than half of the employees will be eligible at that time. Retirement eligibility will be especially pronounced among the agencies' executives and supervisors—over 60 percent of career executives are projected to be eligible by 2012. Federal agencies have a variety of flexibilities at their disposal to help them recruit and retain older workers, including using temporary hires to address short-term needs and rehiring retired federal workers. However, we found that agencies have not always been aware of the full range of available flexibilities. One agency we reviewed—the Social Security Administration—is particularly at risk of losing a substantial portion of its workforce to retirement and has used a variety of strategies to hire and retain older workers, including offering recruitment, retention, and relocation bonuses. Other agencies, such as the Environmental Protection Agency, have developed alternative approaches to attract experienced workers to meet their mission needs. Moreover, certain governmentwide flexibilities, such as flexible and part-time schedules, while not focused directly on older workers, are particularly attractive to them. Overall, the federal government already has a number of characteristics that appeal to all employees and is making progress toward becoming a model employer of older, experienced workers. For example, federal employees can telecommute, work flexible hours, and receive health and retirement benefits that older workers find especially attractive.
OPM and Congress are taking steps to minimize some challenges that agencies face, but opportunities for improvement remain. For example, OPM has developed online decision support tools to provide agencies with guidance on how to use available hiring flexibilities and retention strategies. Congress has legislation pending that incorporates OPM's proposals to enhance agencies' ability to hire and retain older workers by giving agencies the authority, without OPM approval, to rehire retirees without penalty. Agencies have a shared responsibility to pursue the full range of flexibilities and authorities available and to communicate this information within their own agencies. Collectively, these measures will help make federal agencies more competitive in the labor market for all demographic groups.
To apply for disability compensation, a veteran submits a claim to VA’s Veterans Benefits Administration (VBA). A Veterans Service Representative at one of VA’s regional offices reviews the claim and assists the veteran in gathering required evidence, including military service records and medical treatment records from VA facilities and private providers. A Veterans Service Representative’s responsibilities may include establishing claims files, generating notification letters to veterans, assisting veterans in obtaining the evidence needed to support their claims, and assisting in processing appeals of claim decisions. A Rating Veterans Service Representative (RVSR) then evaluates the evidence and, if the RVSR finds the veteran to be eligible, determines the percentage rating for purposes of compensation. Disability compensation varies with the degree of disability and the number of a veteran’s dependents, and is paid monthly. Monthly base benefits in 2011 for an individual range from $123 for 10 percent disability to $2,673 for 100 percent disability. A veteran may cite multiple medical issues in a claim, for example, post-traumatic stress disorder, knee impairment, and hearing loss. The RVSR may grant all, some, or none of the issues in a claim. The veteran can obtain help submitting a claim and navigating the process from a veterans service organization (VSO) representative, private attorney, or agent accredited by VA to assist veterans. In fiscal year 2010, VA received over one million disability compensation claims, a 46 percent increase from fiscal year 2007. Regional office review of notice of disagreement (initial appeal stage). If a veteran disagrees with the regional office’s initial decision on a claim, he or she appeals by submitting a written notice of disagreement to the regional office. (See fig. 1 for a flowchart of the appeals process at VA.) A veteran may choose a traditional review or a Decision Review Officer (DRO) review by responding to an appeal process request letter sent by VA. VA sends this letter to a veteran after receiving his or her notice of disagreement; the letter informs the veteran of the DRO review and traditional review options, requests that the veteran choose one, and states how to obtain representation if the veteran does not already have it. If the veteran chooses the traditional review, the reviewer, who may be an RVSR or a DRO, examines the claim file and any new evidence that the veteran submits and may hold a formal, transcribed hearing with the veteran. The reviewer may overturn the original decision based only on (1) new evidence or (2) a clear and unmistakable error made in the original decision. However, if a veteran chooses the DRO review, a DRO conducts a de novo review of the claim, meaning a new and complete review without deference to the original decision, and can revise that decision without new evidence or a clear and unmistakable error—in other words, based on a difference of opinion. In addition to formal hearings, DROs may hold informal conferences with the veteran or the veteran’s representative to discuss an appeal, including why benefits already awarded are appropriate. Ultimately, in either process, the reviewer may (1) award a full grant, in which all claimed benefits are awarded at the maximum level; (2) award a partial grant, in which some benefits are granted but not necessarily at the maximum level or for all claimed medical issues; or (3) confirm the original decision on the claim, in which case no further benefits are granted on any issues. If a full grant is awarded, the appeal ends.
Otherwise, the regional office issues a written explanation to the veteran in a statement of the case. Regional office preparation of appeal for the Board. Regardless of whether a DRO or a traditional review is chosen, if a veteran disagrees with the regional office’s appeal decision, he or she may file what VA calls a substantive appeal to the Board, which is processed initially by the regional office. The substantive appeal is a document that the veteran completes to explain the issues being appealed and why the veteran believes VA decided his or her case incorrectly. After receiving this document and reviewing any additional supporting evidence provided by the veteran, the regional office may award additional benefits. A veteran may submit new evidence multiple times, and each submission requires the regional office to re-evaluate the claim. When the regional office determines no further work is necessary, it certifies the appeal as complete and transfers it to the Board. Board review of appeal. When the Board receives the file, it may grant or deny the claim. If the Board finds it cannot make a decision until the regional office does additional work (e.g., requesting a more recent medical exam), it sends, or remands, the case back to the regional office or to the VBA Appeals Management Center in Washington, D.C., which develops evidence and adjudicates the claim. A veteran dissatisfied with the Board’s decision can appeal, in succession, to the U.S. Court of Appeals for Veterans Claims, to the Court of Appeals for the Federal Circuit, and finally to the Supreme Court of the United States. VA created the DRO position and implemented the DRO review process nationally in 2001 after a 1997–1998 pilot in 12 regional offices. In the pilot, DROs were responsible for performing reviews of all appeals and had the ability to make a new decision based solely on a difference of opinion with the original decision. VA promulgated final regulations when it implemented DRO review nationwide in 2001; these regulations made the process optional and required the veteran to expressly choose it. Later in 2001, VA headquarters issued guidance to the regional offices expanding the responsibilities of DROs to include mentoring and training other disability claims staff and working with regional office managers to identify error trends and training needs. Since fiscal year 2003, when VA started tracking DRO involvement in appeals, data show more veterans have chosen a DRO review than a traditional review for appeals of decisions on their disability compensation claims. From fiscal years 2003 through 2010, veterans chose a DRO review in 534,439 appeals, or 61 percent of all appeals filed over this time period, according to Board data. The percentage of appeals in which a DRO review was chosen increased each year, from 54 percent of all appeals in fiscal year 2003 to 65 percent in fiscal year 2010 (see fig. 2). However, there was significant variation across VA’s regional offices. For example, during the 8-year time period, the regional office in Columbia, South Carolina, had the lowest percentage of DRO reviews chosen—32 percent—and San Diego, California, the highest—87 percent. More than half of the regional offices had the DRO review chosen in at least 50 percent of all appeals filed. Our review suggests a key factor in the choice of a DRO review is whether a veteran has assistance from a third party, such as a VSO representative or a private attorney.
A DRO review was chosen in 63 percent of all appeals filed by veterans with representation, compared with 44 percent of appeals filed by veterans without representation, across all regional offices. Controlling for selected factors that may affect the choice of a DRO review, veterans with representation are still more likely to choose a DRO review for their appeals. Additionally, we found that representation may also be one of the factors that contribute to the variation in the percentage of veterans choosing a DRO review across regional offices. As the percentage of appeals with representation increased, the percentage of DRO reviews also increased across regional offices. However, representation alone does not fully explain the variation in DRO reviews chosen across regional offices. Other factors that we did not analyze, such as demographic factors, may contribute to the variation across regional offices. Representatives may recommend one appeal option over another to the veterans they assist. Of the 40 veterans we interviewed, 22 told us that representatives from a VSO or a state or county agency helped them decide between the DRO and the traditional review, and 17 of the 22 said that the assistance they received was very or somewhat helpful in deciding between the appeal options. Most VSO representatives we interviewed said they usually recommend a DRO review because they believe it is faster or because a DRO may be more likely to grant the claim. Some also said they recommend a DRO review over a traditional review because a DRO will thoroughly review all the evidence and a veteran has the opportunity to make his or her case to the regional office. VSO representatives at one regional office we visited said that they recommend a DRO review if they believe, based on experience, that the DRO likely to be assigned the case is more open to working with them. However, some VSO representatives also said that they recommend a traditional review under certain circumstances. For example, if an appeal has ambiguous evidence, they may recommend a traditional review because it may result in an appeal going to the Board faster and because they believe the Board has more flexibility than a DRO to grant benefits in such cases. Also, VSO representatives at one site visit told us they generally recommend a traditional review because in their view a DRO review takes too much time. In 53 percent of all appeals filed since fiscal year 2003, the DRO review option was chosen on the notice of disagreement itself. According to VSO representatives we interviewed, because veterans’ representatives have power of attorney, they can choose an appeal option at the same time they file an appeal for a veteran rather than waiting for the veteran to receive VA’s appeal process request letter. Choosing a DRO review on the same day an appeal is filed can save time because the regional office does not have to send the appeal process request letter and wait for the veteran’s response before beginning its review. Veterans chose an appeal option without assistance from a representative in 13 percent of appeals, and some VA officials and VSO representatives told us these veterans may not fully understand the options described in VA’s appeal process request letter. The VBA manual provides a template of the letter for the regional offices, which prepare and mail it to a veteran after receiving the notice of disagreement.
VA headquarters officials told us that while regional offices have flexibility to add information to the template letter, they cannot exclude any information, and headquarters prefers that the regional offices use the template letter. (For VA’s template letter, see app. II.) Some VA officials and VSO representatives said veterans may not be able to make an informed decision based on the letter alone. Specifically, 29 of 55 regional office managers we surveyed (53 percent) answered that they believed no or only some veterans could make an informed decision using only the VA letter. The other regional office managers we surveyed (47 percent) said that all or most veterans could make an informed decision using only the VA letter (see fig. 3). Of the DROs we spoke with, 10 of 17 said that in their view, no or only some veterans could make informed decisions based on the letter alone, and most VSO representatives at the regional offices we visited said that few or no veterans could do so. Representatives from one VSO told us veterans may phone after receiving the letter because they do not understand its description of appeal options. Some managers in our survey and VA staff we interviewed suggested changes to the template, for example, shortening its length, using simpler language and less jargon, using graphics or tables to aid understanding, explaining the benefits of representation, and advising veterans they can request informal or formal hearings if they choose a DRO review. They also said the letter could better explain differences between DRO and traditional reviews, particularly in the treatment of evidence and the authority given a DRO to render a new decision without additional evidence. Our analysis of the template in the VBA manual, based on criteria established for federal government communication with the public, found the letter did not clearly explain appeal options and the appeal process. Although we found the letter met several criteria for clear communication by federal agencies and programs, such as a simple structure, concise headings, and avoidance of complex language, overall we found it did not meet some other criteria, including:
• defining terms that could be unfamiliar to the veteran, such as “decision review officer,” “de novo reviews,” “clear and unmistakable error,” and “substantive appeal”;
• highlighting key deadlines for responding by including them at the beginning of the letter, so the veteran is clear that VA needs a response within 60 days; and
• using tables or graphics to explain appeal options and enhance the recipient’s understanding (see fig. 4).
One of the regional offices we visited made changes to the template that address some of these issues, for example, including instructions on how to request a hearing and italicizing important information, such as the 60-day deadline for a veteran to respond with an appeal choice. Of the 40 veterans we interviewed, 14 remembered receiving and were able to describe the appeal process request letter, and 9 of those told us that the letter explained the appeal options only somewhat clearly or not at all clearly. Five said it explained the options very clearly. Some of the veterans we spoke with offered suggestions on how VA could improve the letter, for example, by providing a point of contact for veterans with questions, explaining when they might next hear from VA about their appeal, and more thoroughly describing the two appeal options.
VA headquarters officials said VBA management and attorneys reviewed the letter template but did not test it with focus groups of veterans because VBA does not have contracts to conduct focus groups. According to federal guidelines for clear communication with the public, testing documents should be an integral part of an agency’s plain-language planning and writing process. Veterans who chose a DRO review were more likely than those who chose a traditional review to get additional disability compensation benefits after initial review of their notices of disagreement. From fiscal years 2003 through 2010, at least some additional benefits were awarded at the regional level in 32 percent of DRO reviews compared to 23 percent of traditional regional office reviews, according to our analysis of Board data (see fig. 5). We found that having a DRO review was associated with a greater chance of being awarded some additional benefits even after controlling for some other factors that may affect appeal outcomes, such as whether a veteran had a representative or the regional office in which the appeal was filed. Veterans awarded additional benefits at this appeal stage receive these benefits immediately even if they continue their appeals, according to a VA official. DRO reviews took about a month longer than traditional reviews to complete, or 266 days compared to 235 days, on average, for the period from fiscal years 2003 through 2009 (see fig. 6). It is possible DRO reviews take longer because they involve a more thorough examination of the evidence. For example, one DRO we spoke with said such reviews take longer because they require a completely fresh look at all evidence in the original claim. Regardless of whether veterans choose a DRO or a traditional review, their waiting time for a decision on the notice of disagreement is considerably less than if they continue their appeal to the Board. For appeals filed from fiscal years 2003 through 2007, the time from filing of the notice of disagreement to Board decision was more than 1,000 days on average. Although veterans choosing a DRO review were more likely to gain some additional benefits at the regional office level, in comparison to a traditional review, DRO review does not appear to reduce the number of appeals continuing to the Board—which was an important goal for VA in introducing this appeal option. The resolution rate at the regional office level for appeals in which a DRO review was selected has been about the same as the rate for traditional reviews. Of the 593,526 appeals of disability claim decisions receiving either DRO or traditional reviews from fiscal years 2003 through 2008 that had a final regional office outcome, 72 percent ended at the regional office level. The remainder continued to the Board. For appeals in which a DRO review was selected, 71 percent ended at the regional office level compared to 73 percent of appeals in which a traditional review was chosen. The resolution rate for DRO and traditional reviews was roughly the same in each year during this 6-year period (see fig. 7). Resolution rates varied across VA’s regional offices. For both DRO and traditional reviews, resolution rates ranged from less than 50 percent to more than 80 percent in individual offices, with the majority having rates of more than 70 percent for both types of reviews.
VA officials offered several possible explanations for the variation across offices, for example, the extent to which VSO representatives are proactive in submitting necessary evidence. The negligible difference between appeal resolution rates at the regional office level for DRO and traditional reviews in part reflects veterans’ decisions on how to proceed if they do not receive a full grant of requested benefits. Although a slightly higher percentage of DRO reviews resulted in full grants of benefits for veterans, veterans choosing DRO review were less likely than those choosing traditional review to end their appeals when not granted full benefits. Board data show that for appeals filed from fiscal years 2003 through 2008, the most common reason for resolution at the regional office level was a veteran’s failure to return a form to continue the appeal (failure to respond) after receiving VA’s explanation of its decision and of what steps to take to move the appeal to the Board (see fig. 8). Traditional reviews were somewhat more likely than DRO reviews to be resolved through veterans’ failure to respond—41 percent versus 36 percent, respectively. The second most common reason for appeal resolution at the regional level was a full grant of benefits, which automatically ends the process. In contrast, DRO reviews were somewhat more likely than traditional reviews to be resolved through a full grant of benefits—21 percent versus 17 percent, respectively. Most decisions to grant full benefits were made at the initial appeal stage, while some were made during the stage in which the regional office prepares the appeal for the Board. The remaining appeal resolutions were due to veterans or their representatives withdrawing their appeals by contacting VA or to the death of the veteran. While we found no difference in how veterans responded to a partial grant of benefits based on their choice of appeal option, we found that veterans who chose a traditional review appeared to respond differently than those who chose a DRO review when no additional benefits were granted at the initial appeal stage. Board data for fiscal years 2003 through 2008 show that when no additional benefits were granted (i.e., the original decision was upheld), appeals in which a traditional review was chosen were somewhat more likely than those in which a DRO review was chosen to end at the regional office level—57 percent versus 51 percent, respectively. By contrast, in cases that resulted in the award of partial benefits, there was no difference based on which appeal option had been chosen—75 percent of both traditional appeals and DRO appeals ended at the regional level. Representation may be a factor in why veterans selecting a traditional review are more likely to end their appeals after receiving no additional benefits from the regional office. Fifty-nine percent of appeals that received no additional benefits at the initial appeal stage and in which the veteran had no representative were ended by the veteran without going on to the Board, compared to 52 percent of such appeals in which the veteran had a representative. As noted previously, veterans without representation are more likely to choose a traditional review. VA regional office staff told us that certain aspects of the DRO review can be effective in resolving appeals before they continue to the Board.
Managers we surveyed rated DROs’ authority to make a completely new (de novo) decision on a claim and to hold informal conferences with veterans and representatives as the most effective tools for resolving appeals (see fig. 9). Of the 17 DROs we interviewed, 15 rated their authority to make a completely new decision as very or moderately effective in resolving appeals, 10 rated informal conferences as very or moderately effective, and 7 rated formal hearings as very or moderately effective. Many DROs in our site visits told us their authority to make a completely new decision on a claim allows them to reverse original decisions without submission of additional evidence by the veteran. DROs also said informal conferences may help resolve appeals because they enable DROs to obtain information from a veteran’s representative or explain to a representative and veteran why no additional benefits can be granted. While many regional staff believe DRO reviews have a greater potential to resolve appeals, we found that the appeal resolution rates for DRO and traditional reviews are quite similar, perhaps in part because VA’s criteria for assessing and rewarding individual DROs’ performance do not always encourage them to use their unique authorities to resolve appeals. VA currently assesses the performance of individual DROs using four criteria: quality of work, productivity, customer service, and timeliness. The existing criteria do not specifically encourage appeal resolution at the regional level, according to 60 percent of surveyed managers and 10 of the 17 DROs we interviewed. For example, managers in one office we visited said the current criteria encourage DROs to complete tasks that help them meet their productivity requirement but may not necessarily lead to appeal resolution. The department is currently revising its criteria for assessing DRO performance based on a 90-day pilot program in eight regional offices conducted in fiscal year 2010, and it expects to implement new criteria nationwide in fiscal year 2012. According to VA officials, the revisions are intended in part to focus DROs’ attention more on resolving appeals at the earliest possible stage of the process. VA officials told us the new criteria under development will better encourage appeal resolution by restructuring the way work credits—needed to meet the productivity requirement—are awarded to DROs. For example, under the existing criteria, work credits are awarded for holding informal conferences, but under the new criteria additional credits would be awarded when the conference results in resolving the appeal. In addition, VA officials said the new criteria aim to promote appeal resolution by more explicitly promoting regular communication with veterans’ representatives. They said that even though such communication can help resolve appeals by helping representatives understand, for example, why further benefits cannot be granted in a particular case, DROs have become less focused on communication over the years due to production pressures. Since the DRO pilot program in 1997–1998, VA has expanded DRO duties to include tasks not related to appeals. According to the estimates of regional office managers who responded to our survey, DROs overall tend to spend the majority of their time on appeal-related tasks, but they also spend a significant amount of time on training other staff and performing quality reviews (see fig. 10). Time spent on different tasks varies across regional offices.
For example, the proportion of time spent on conducting de novo reviews of appeals—that is, a completely new evaluation of a claim without deference to the original decision—ranged from 3 to 70 percent in individual offices, and on formal training ranged from 0 to 40 percent. The four offices we visited typically assigned individual DROs to specific responsibilities, such as reviewing appeals, training other staff, and conducting quality reviews of other staff’s work, rather than having each DRO perform the full range of tasks. Managers in one office said specialization is helpful because it permits DROs to focus on training without the distraction of appeals work. Regional office managers may also assign DROs to different duties at different times to meet changing office needs. For example, one office we visited had recently assigned more DROs to training and quality review to address problems with its RVSRs’ performance, and managers said preliminary results showed an improvement in RVSR quality scores after this reallocation of DRO resources. Since the implementation of the DRO review process in 2001, there have been differing opinions within VA about the proper balance between processing appeals and performing other DRO tasks. In 2006, an internal VA study group that assessed the impact of the DRO review process recommended that VA limit the scope of DRO duties to reviewing appeals and have them perform de novo reviews of all appeals—not just those in which a DRO review was selected. Their report found that there was a sufficient number of DROs to perform de novo reviews of all appeals if DROs’ duties were limited to appeals-related tasks. Officials told us that VA management decided not to implement these recommendations because of concerns that there was an insufficient number of DROs to review all appeals and a belief that other tasks are also important DRO functions. There was also a mix of opinions on this topic among regional office managers we surveyed and interviewed. Almost two-thirds of those surveyed stated that eliminating the DRO review election process and having DROs perform only de novo reviews of all appeals would somewhat or greatly improve their effectiveness. On the other hand, managers said it could be difficult to have their DROs spend more time on such reviews. About 60 percent of surveyed managers said they would prefer to allocate more DRO time to performing de novo reviews, but some respondents noted that they cannot allocate DROs’ time optimally because of staff and resource limitations and the need to assign DROs to other duties. While VA has expanded the role of the DROs so they are contributing to other VA goals beyond appeal resolution, the department has not developed performance measures to assess whether DROs are successful in meeting their original purpose of reducing the number of appeals to the Board. The measures that VA uses to assess national and regional office performance in processing appeals do not include the proportion of appeals resolved at regional offices. Existing VA measures for national and regional office appeals performance involve the number of appeals awaiting a decision, number of days appeals have been pending a decision, and percentage of appeals that are remanded from the Board. We have previously noted that agencies should develop performance measures that are linked to agency goals. 
Linking performance measures to broader agency goals at each operating level reinforces the importance of these goals and may help managers identify areas where problems exist and corrective action is required. VA already collects some data related to appeal resolution, but some VA officials question the value of such a performance measure. The department tracks data on the number of appeals resolved at different stages of the appeal process—for example, through the veteran’s failure to respond to the statement of the case or after the veteran’s submission of a substantive appeal to the Board—nationally and by regional office. It makes these data available to regional offices. About half the regional office managers surveyed said an appeal resolution goal at the regional office level would somewhat or greatly enhance DROs’ effectiveness. However, VA officials told us they have not established a performance measure for appeal resolution because this outcome is not entirely under the control of the DRO, as a veteran has a right to continue his or her appeal to the Board. Some VA managers and staff in the offices we visited also expressed a concern that such a goal could have a negative impact. For example, it could push DROs to grant benefits that are not completely justified or to pressure veterans to withdraw their appeals. While we understand this potential concern, we note that VA has quality review procedures in place to ensure the quality and accuracy of DRO decisions. Specifically, VA reviews an average of five cases per month worked on by each DRO. VA assesses each case against several criteria, including whether all claimed issues were addressed, whether all applicable evidence was discussed, and whether all issues were correctly granted or denied. Some regional office managers agreed that setting an appeal resolution goal would not encourage DROs to grant unjustified benefits, because, for example, existing quality review procedures would prevent this from happening. VA has not developed a nationwide training curriculum for DROs to help them learn the duties of the position. The training available to DROs from VA headquarters is the same training offered to RVSRs (staff who evaluate initial claims). VA requires its claims processing staff, including DROs, to take 85 hours of training annually. Headquarters and regional office officials told us DROs typically take the same courses as RVSRs. As part of its training for claims processors, VA has three courses for all regional office staff who process appeals: an orientation to the appeals team, a course on the appeal process, and a course on how to use the Board’s appeals tracking system. VA headquarters officials told us they have not developed training specifically for new DROs because new DROs should already be technical experts in evaluating claims when promoted. However, they acknowledged that the department has not completed an analysis of DRO tasks specifically to determine the training needs of this position. According to generally accepted criteria, a key step in developing successful training in the federal government is for an agency to understand the skills that its workforce needs to achieve agency goals. Furthermore, these criteria stress the need for agencies to incorporate continuous, life-long learning into their training—even experienced staff may still need additional training in certain areas.
A number of managers and DROs in the regional offices we visited said the skills DROs learn before promotion may not be sufficient to perform their new tasks. Several managers told us new DROs, based on their prior experience, are indeed technical experts in evaluating claims; however, managers and DROs said new DROs may lack experience with certain responsibilities, such as holding formal hearings and informal conferences, training other staff, processing appeals, and balancing multiple priorities. Regional managers at three of the four offices we visited said the appeals courses offered by VA are not sufficient to teach new DROs how to perform their duties; a manager in one office said these courses provide a general overview and are not primarily intended to train DROs. DROs in the four offices we visited learn their duties primarily on the job and by observing more experienced colleagues, according to managers. However, some offices have developed more formal training for DROs. Forty percent of surveyed regional managers said their office has developed specific training that addresses such topics as the appeals process, conducting formal hearings and informal conferences, reviewing other staff’s work, and policy and regulatory changes. Despite regional office efforts to provide targeted training, our survey of regional managers found that 93 percent believe a nationally standardized training program for newly promoted DROs would improve their effectiveness, and 16 of the 17 DROs we interviewed also said a national training program would be helpful. Managers in two regional offices we visited said a training program developed by VA’s central office would ensure consistency in how DROs are trained across the nation. Managers responding to our survey indicated that tailored training on a number of topics would enhance the ability of new DROs to perform their job duties (see fig. 11). A formal, centrally designed training program—especially for new DROs—has been proposed within VA. Under this proposal, according to one VA official involved with the management of the DROs nationally, new DROs would receive instruction in areas such as training other staff and communicating with veterans and their representatives. The training might be administered in a central location for new DROs from multiple regional offices, or developed centrally and then administered by regional offices. However, VA does not yet have formal plans to move ahead with the development of such a program, according to VA officials. Review of veterans’ appeals by VA’s DROs has had some positive impacts for veterans, and it is possible that more veterans would choose a DRO review if they fully understood the distinction between the DRO and traditional reviews. Because VA’s outreach letter lacks information and clarity that veterans need to make an informed choice between the options, some—in particular, those without a representative—may not be taking advantage of the DRO review process. However, looking more broadly, VA has not achieved its original goal for the DRO review process of reducing the number of appeals continuing to the Board and thereby shortening the time that veterans wait for appeal decisions. Indeed, while VA is taking steps to place a greater focus on appeal resolution in the criteria it uses to assess the performance of individual DROs, it has yet to establish any national or regional office performance measures related to resolving appeals at the regional level.
Certainly, VA must balance competing demands on DROs, who are senior claims processors in its regional offices, both to resolve appeals and to train and supervise less experienced staff. But without a more strategic approach that includes goals and measures for resolution of appeals, VA does not know whether it is fully leveraging DRO reviews and, ultimately, maximizing their effectiveness for the veterans VA serves. Regardless of the proper balance between their different responsibilities, DROs may not be contributing as much as they could, either to the goal of appeal resolution or to the goal of developing newer claims processing staff, because they do not always receive training specific to their duties. To clarify information for veterans and ensure the most effective use of DROs, the Secretary of Veterans Affairs should direct the Veterans Benefits Administration to take the following three actions:
• revise the sample appeal election letter in its policy manual to define unfamiliar terms and emphasize key deadlines, and test any revised letter’s clarity with veterans before implementing it;
• establish national and regional office performance measures related to appeal resolution at the regional level and ensure that sufficient quality review procedures are in place to prevent DROs from granting unjustified benefits; and
• assess the knowledge and skills that DROs need to perform their varied responsibilities, determine if any gaps exist in the training currently available, and, if necessary, develop a training curriculum or program tailored to DROs.
We provided a draft of this report to VA for review and comment. In its written comments (see app. III), VA concurred fully with two of our recommendations and partially with our recommendation regarding an appeal resolution performance measure. The department concurred with our recommendation to revise its sample appeal election letter. It reported that it has formed a workgroup to assess and recommend changes to the appeal election process, potentially including changes to the letter. VA also concurred with our recommendation to assess the knowledge and skills that DROs need and, if necessary, develop a training curriculum for them. VA said it plans to develop additional training based on the results of skills certification tests administered to DROs and on a job task analysis conducted to aid the design of these certification tests. VA partially concurred with our recommendation to establish national and regional performance measures related to the resolution of appeals at the regional office level. The department noted that—as explained previously in our report—it is revising the criteria used to assess the performance of individual DROs to place greater emphasis on resolving appeals at the earliest stage possible, for example, through communication with veterans and their representatives. However, it also stated that resolution of appeals at the regional level should not be a performance measure used to assess the impact of DROs, for several reasons: (1) DROs also play an important role in training, mentoring, and quality review, so an appeal resolution measure would not capture their full impact; (2) each veteran has a right to continue an appeal to the Board if dissatisfied with the regional office decision, and whether a veteran continues is beyond a DRO’s control; and (3) establishing an appeal resolution performance measure could encourage the granting of benefits that are not justified.
We acknowledge that DROs affect VA operations beyond appeals. Yet VA’s own documents—including its 2006 review of the DRO program—state that the department’s primary goal in establishing the DRO review process was to increase the percentage of appeals resolved before continuing to the Board, and the changes VA is now making to its criteria for measuring individual DRO performance indicate that appeal resolution remains a major goal. We are not recommending that an appeal resolution performance measure replace existing VA performance measures related to appeals or to other VA processes on which DROs could have an impact. Our recommendation is that VA add such a measure to gauge whether the program is meeting its specific intended goal and to help the department assess whether further adjustments are needed. We also acknowledge that it is not possible to prevent all appeals from continuing to the Board; such an objective may not even be desirable. However, based on its changes to the criteria for individual DRO performance, VA clearly believes that changes in incentives can encourage behaviors that are likely to resolve more appeals early on. Finally, as VA also noted in its comments, individual DROs must meet rigorous accuracy standards in their appeals work, which should help guard against the granting of unjustified benefits. However, if VA does not believe that its existing quality control measures are sufficient to guard against the granting of unjustified benefits, we would encourage the department to consider what additional quality control measures it could implement to ensure that the addition of a new performance measure would not have the unintended consequence of increasing the award of unjustified benefits. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until one day from the report date. At that time, we will send copies of this report to the relevant congressional committees, the Secretary of Veterans Affairs, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix IV. We were asked to examine (1) the extent to which veterans choose a Decision Review Officer (DRO) review as opposed to a traditional review, (2) outcomes for veterans who choose a DRO review, and (3) challenges the Department of Veterans Affairs (VA) faces in managing DROs. To answer these objectives, we reviewed relevant laws and regulations, as well as VA documents including procedural manuals, internal studies, and the sample appeal election letter that VA headquarters provides for its regional offices. We interviewed cognizant officials from the Veterans Benefits Administration (VBA) and the Board of Veterans’ Appeals (Board). We interviewed managers, DROs, and veterans service organization (VSO) staff in 4 regional offices, and we conducted a web-based survey of managers in all of VA’s 57 regional offices. We obtained and analyzed administrative data from the Board on outcomes and processing times for appeals of disability compensation claims.
Finally, to learn about veterans’ perspectives on the DRO review election process, we conducted phone interviews with a small sample of veterans who had appealed their disability compensation claims. We conducted this review from July 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We analyzed VBA’s appeal process request letter as a factor that may affect veterans’ decisions about the appeal options. For our analysis we used the sample letter in the VBA manual, since VBA central office officials said that regional offices use the sample as a template for the letters they mail to veterans. We selected 16 criteria for clear federal communication using information from the Plain Language Guidelines, a previous GAO report on VBA letter clarity, and the VBA manual (see table 1). We reviewed the letter and came to a consensus about which criteria were met and which were not met. We conducted site visits to 4 of VA’s 57 regional offices: Atlanta, Georgia; Providence, Rhode Island; Salt Lake City, Utah; and Waco, Texas. We conducted these visits to learn about regional office practices in utilizing DROs and regional staff’s opinions on such topics as how veterans choose between appeal options, factors affecting DRO review outcomes, and challenges facing VA in utilizing DROs. In each regional office, we interviewed regional office managers, at least one appeals coach, multiple DROs, and local VSO staff. In total, we interviewed 17 DROs across the 4 offices, with varying levels of experience in their position. In some offices we also interviewed the regional office training coordinator. We judgmentally selected these offices to achieve variation in several factors, including geographic location, number of staff, timeliness of appeal processing for appeals in which a DRO review was selected, and participation in a pilot study of revised criteria for assessing DRO performance (see fig. 12). What we report about these sites may not necessarily be representative of other VA regional offices. To analyze data on VA’s appellate workload, including the number of appeals in which DRO and traditional reviews were selected, the outcomes for DRO and traditional reviews, and the processing times for DRO and traditional reviews, we obtained record-level appeals data extracted on April 7, 2011, from the Veterans Appeals Control and Locator System (VACOLS). We limited our analysis to appeals of disability compensation claims. We also limited our analysis to original appeals, as opposed to, for example, appeals that had been remanded by the Board. Using the record-level data, we generated nationwide annual data on appeals filed from fiscal years 2003 through 2010. The data for each fiscal year include all notices of disagreement filed with VA during that fiscal year. We looked at data back to fiscal year 2003, even though the DRO review process was established in 2001, because the VACOLS data element that identifies appeals in which a DRO review was selected was only added to VACOLS at the end of 2002.
For some analyses, we excluded data for appeals filed in recent fiscal years because a high proportion of these appeals were still pending some action by the regional office. To assess the reliability of the record-level appeals data, we (1) reviewed documentation on VACOLS, including the data dictionary, (2) interviewed Board officials about VACOLS and any data reliability issues, and (3) performed electronic testing. We found the data to be sufficiently reliable for our reporting objectives. To investigate the relationship between DRO review selection and representation, using VACOLS data, we first examined descriptive statistics at the regional office level. We found substantial variation in rates of DRO election, as well as in rates of representation among veterans filing appeals, across regional offices. Using linear multiple regression analysis, we estimated that at the regional office level, a one percentage point increase in the proportion of veterans with representation was associated with approximately half a percentage point increase in the rate of DRO election across regional offices, after controlling for fiscal year. However, variation in representation levels across regional offices did not fully explain variation in DRO election across offices. Our next analysis focused on appeals-level data. Descriptive data show that, prior to controlling for other factors, veterans with representation were more likely to choose a DRO review than those without representation. Specifically, table 2 shows that approximately 63 percent of veterans with representation requested a DRO review, compared to approximately 44 percent of veterans with no representation. An alternative way to examine the likelihood of DRO election among veterans with and without representation is the odds of requesting a DRO review. Prior to controlling for other factors, the odds of a veteran with no representation requesting a DRO review are defined as the proportion of veterans requesting a DRO review divided by the proportion requesting a traditional review, or 0.79 (44.1/55.9). In comparison, among veterans with representation, the odds of requesting a DRO review were 1.72 (63.3/36.7). To compare the relative proportion of veterans with and without representation who requested a DRO review, we can construct an odds ratio that compares the odds of DRO selection to the reference group of veterans without representation, which is 2.18 (1.72/0.79). In other words, prior to controlling for other factors, the unadjusted odds of a veteran with representation requesting a DRO review were 2.18 times the odds of a veteran without representation requesting such a review. Although unadjusted odds provide useful summary information, they do not account for multiple other factors that could also affect a veteran’s choice of review. Logistic regression analysis can be used to estimate the odds ratio comparing the likelihood of DRO selection among veterans with and without representation, after also controlling for other factors that could potentially influence the choice of review type. We estimated a logistic regression model of DRO selection controlling for other factors, including the fiscal year of the appeal, the regional office of the appeal, whether the veteran filing the appeal had representation, how many distinct medical issues were associated with each appeal, and which body systems were involved in the issues under appeal.
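To make this arithmetic and estimation concrete, the following sketch (illustrative only, not GAO's analysis code) reproduces the unadjusted odds calculations from the percentages above and then fits a logistic regression of the same general form on simulated data. The column names, office values, and simulated records are hypothetical stand-ins for the VACOLS extract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Worked arithmetic for the unadjusted odds ratio reported above.
odds_no_rep = 44.1 / 55.9                # ~0.79
odds_rep = 63.3 / 36.7                   # ~1.72
print(round(odds_rep / odds_no_rep, 2))  # ~2.18

# Simulated stand-in for the record-level appeals data; all column
# names, office values, and figures here are hypothetical.
rng = np.random.default_rng(0)
n = 5000
appeals = pd.DataFrame({
    "represented": rng.integers(0, 2, n),
    "fiscal_year": rng.integers(2003, 2011, n),
    "regional_office": rng.choice(["Albuquerque", "Columbia", "San Diego"], n),
    "num_issues": rng.integers(1, 5, n),
})
# Simulate DRO election so that representation roughly doubles the odds,
# mirroring the adjusted odds ratio of about 2 reported in table 2.
log_odds = np.log(odds_no_rep) + np.log(2.18) * appeals["represented"]
appeals["dro_selected"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Logistic regression of DRO selection; C(...) marks categorical controls.
result = smf.logit(
    "dro_selected ~ represented + C(fiscal_year)"
    " + C(regional_office) + C(num_issues)",
    data=appeals,
).fit(disp=0)
print(np.exp(result.params)["represented"])  # adjusted odds ratio, ~2
```

Exponentiating a fitted logistic regression coefficient yields the adjusted odds ratio for that variable, which is how the roughly twofold difference for represented veterans reported in table 2 should be read.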
We were not able to control for certain other factors, such as the timing of DRO election, because we lacked comparative data for traditional reviews. There may be systematic differences between veterans who decide not to appeal a ruling and those who appeal a ruling and thus chose between DRO and traditional reviews. Additionally, we lacked information to control for certain factors such as a veteran’s branch of service or demographic characteristics. After controlling for fiscal year, regional office, number of issues in each appeal, and body systems related to the issues under appeal, we found that appeals filed by veterans with representation were substantially more likely than those filed by veterans without representation to elect the DRO review process. Per the last column of table 2, after controlling for other factors, the odds that an appeal filed by a veteran with representation underwent a DRO review were approximately twice the odds that an appeal filed by a veteran without representation underwent a DRO review. This result was statistically significant at the p<0.05 level. Other factors were also significantly associated with a veteran’s choice of review. Table 2 shows that even after controlling for representation, appeals filed at different regional offices varied substantially in the odds that they underwent a DRO review. After controlling for other factors, the odds that a DRO review was elected were approximately four times as high for appeals filed in San Diego, California, as for appeals filed in Albuquerque, New Mexico, and approximately 75 percent lower for appeals filed in Columbia, South Carolina, than for those filed in Albuquerque. Additionally, compared to veterans filing appeals with only one issue, veterans filing appeals with multiple issues had higher odds of electing a DRO review after controlling for other factors. After controlling for other factors, the odds that a DRO review was elected increased significantly over time. For example, the odds that an appeal filed in fiscal year 2010 underwent a DRO review were about 63 percent higher than the odds for an appeal filed in fiscal year 2003. For the most part, appeals that included issues related to some body systems had odds of DRO review similar to appeals that did not involve those systems, although the difference was statistically significant for several issues. As noted previously, we could not control for the full range of variables that could conceivably affect a veteran’s choice of review type, including the timing of DRO election, a veteran’s branch of service or demographic information, or factors that affected whether a veteran decided to appeal rather than accept an initial ruling. To the extent that these appeal-specific factors are correlated with both the choice of a DRO review rather than a traditional review and other variables in our model, our model estimates may be biased. We tested several alternative specifications of the model, such as reclassifying those whose representation status was unknown to either having or not having representation, and found that the direction and magnitude of our results were similar across models. We further confirmed that the model estimates were not sensitive to the inclusion or exclusion of body system or to recategorization of the variable measuring the number of issues in an appeal.
Overall, while we are confident that our models of review type provide some explanatory power and are robust to alternative specifications, they are limited in their ability to substantially explain individual veterans’ decisions to choose a DRO review rather than a traditional review. We used appeals-level VACOLS data to examine whether DRO reviews were more or less likely than traditional reviews to result in a full or partial award. Table 3 shows the likelihood of grant awards by review type (DRO or traditional). Although 22.7 percent of appeals that underwent a traditional review received a full or partial award, 31.5 percent of appeals that underwent a DRO review received a full or partial award. Comparing the odds of a full or partial award for cases that underwent different review processes, the unadjusted odds ratio comparing the odds of an award for a DRO review to those of a traditional review is 1.56. In other words, prior to controlling for other factors that could affect the odds of an award, the odds that a DRO-reviewed appeal received an award were 56 percent higher than those for a traditionally reviewed appeal. We used logistic regression analysis to assess whether DRO reviews still had higher odds of receiving a full or partial grant, even after controlling for selected factors that could affect the likelihood of an award grant. We controlled for fiscal year; regional office; number of issues in each appeal; body systems related to the issues under appeal; and veterans’ representation status. After controlling for these factors, we found that DRO-reviewed appeals were still significantly more likely than traditional reviews to receive an award. Specifically, table 3 shows that the odds that an appeal that underwent a DRO review received a full or partial award were still approximately 60 percent higher than for an appeal that underwent a traditional review, after controlling for other factors (odds ratio 1.58). Besides a veteran’s choice of appeal option, some other variables were also correlated with higher odds of receiving an award. For example, appeals filed in certain regional offices were significantly more likely to receive grants than appeals in other offices. We also found that, after controlling for review type and other factors, the odds that an appeal filed by a veteran with representation won a full or partial award were quite similar to those of an appeal filed by a veteran without representation (odds ratio 1.03, or approximately 3 percent higher odds). Furthermore, compared to appeals with only one issue, appeals with multiple issues had notably lower odds of receiving a full or partial award after controlling for other factors (between approximately 25 and 50 percent lower than the odds of appeals with one issue). We tested a variety of alternative model specifications with different variables and populations. Given that a large portion of recent cases may not have been resolved as of the time our data were produced, we tested our model excluding fiscal years 2009 and 2010 to avoid biasing our results toward cases that could be quickly resolved. We confirmed that a DRO review still had higher odds of an award even after reclassifying the dependent variable into full grant versus partial or no grant. We further confirmed that the model estimates were not sensitive to the inclusion or exclusion of body system or to recategorization of the variable measuring the number of issues in an appeal.
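As a cross-check on the unadjusted award figures above, the same odds arithmetic can be applied to the percentages in table 3; the small difference from the published 1.56 reflects rounding in the reported percentages.

```python
# Unadjusted odds of a full or partial award, from the percentages above.
odds_dro = 31.5 / 68.5          # DRO reviews: ~0.46
odds_traditional = 22.7 / 77.3  # traditional reviews: ~0.29
print(round(odds_dro / odds_traditional, 2))  # ~1.57 vs. the published 1.56
```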
We also tested a model with an interaction term between representation status and review type. These models confirmed that, compared to traditional cases with or without representation, appeals that went through DRO review were more likely to receive a full or partial grant. Overall, although our models provide some explanatory power and are robust to alternative specifications, we acknowledge that they are limited in their ability to substantially explain the outcome of the award process, especially in light of known omitted variables related to the specifics of each case. To obtain managerial views on the appeal process and use of DROs in VBA’s 57 regional offices, and to understand variations among offices, we conducted a web-based survey of one regional office manager, or assistant manager if so designated by their director, at each office. Survey development. We developed survey questions with input from an official at VA’s Office of Field Operations and GAO subject matter experts on survey design and performance management. To pretest the questionnaire, we conducted in-depth probing interviews and held debriefing sessions with four regional office managers by telephone. Pretest participants were selected to represent variety in regional office sizes and to include offices that participated in VBA’s pilot program for new DRO performance measures. We conducted these pretests to determine if the questions were burdensome or difficult to understand and if they measured what we intended. On the basis of the feedback from the pretests and these other knowledgeable entities, we modified the questions as appropriate. Survey implementation. We obtained e-mail addresses of managers from VA’s Office of Field Operations. This office sent a message to prospective respondents on December 20, 2010, encouraging participation in the upcoming survey. We then began the survey by e-mailing passwords and links to the web-based questionnaire on January 4, 2011. To obtain candid responses and a high response rate, we pledged not to link the responses presented in our report to individual survey participants, and we followed up with two e-mails to initial nonrespondents, the first on January 13, 2011, and the second on January 31, 2011. Additionally, we contacted those managers who had not responded to the survey e-mails by telephone from February 3 through February 18, 2011. We also contacted some respondents by e-mail to clarify unclear, inconsistent, or incomplete responses. We received usable responses from 56 of the 57 managers, for a 98 percent response rate, and ended the survey on February 22, 2011. Analysis of responses. To analyze the responses, we used computer programs that an independent GAO analyst verified to be written correctly. We provided respondents with an opportunity to answer several open-ended questions. The responses to those questions were categorized and coded for content by a GAO analyst, while a second analyst verified that the first analyst had coded the responses appropriately. Some comments were coded into more than one category since some respondents commented on more than one topic. As a result, the number of coded items is not equal to the number of respondents who provided comments. Analysis of survey error. Because we identified and selected all 57 regional offices for our survey, our data are not subject to errors caused by selecting only a sample or failing to include a portion of the population in the sample.
The practical difficulties of conducting any survey may introduce errors due to measurement, nonresponse, or data processing; however, steps taken during survey development, implementation, and analysis minimize the chance of such errors. Additionally, because of the high response rate, the risk of nonresponse error was further minimized, and because the sole nonresponding office was also one of the pretest sites, we were able to determine that the site’s characteristics and pretest answers did not differ greatly from respondents’ answers. Inclusion of what would likely have been their final answers, as determined by their pretest responses, would not materially affect overall results. As a result, we conclude that there is no material risk of nonresponse bias in our survey. We conducted structured phone interviews with veterans to gather information on the factors that affect veterans’ appeals decisions, such as the clarity of VBA’s appeals process election letter, assistance from VSOs, and other specific reasons for choosing a DRO review or a traditional review. The team worked with a survey methodologist and a communication expert in the development of a phone script and interview questions for the veterans, which were pretested with six veterans. The finalized phone script and questions included screening questions to determine if veterans understood the questions or recalled information accurately. To develop our sample, we obtained VACOLS data from the Board on veterans who had filed appeals of their disability compensation decisions between February 1, 2010, and July 31, 2010. The data provided by the Board included the date of the appeal; whether the veteran had selected a DRO or a traditional review; the date on which the DRO review was selected; the veteran’s name; and, when available, the veteran’s address and phone number. The file contained 77,542 unique appeals, which corresponds to fewer than 77,542 unique veterans, because one veteran may have filed multiple appeals. We removed from the list all veterans associated with a regional office that participated in VA’s Expedited Claim Adjudication pilot project in 2008, since all veterans who took part in this pilot automatically received a DRO review, so our interview questions—which focus on how veterans made the decision to elect DRO review—would not have been relevant for them. Four regional offices were part of this pilot: Nashville, Tennessee; Seattle, Washington; Lincoln, Nebraska; and Philadelphia, Pennsylvania. The final file, without any veterans associated with the Expedited Claim Adjudication sites, contained 71,891 appeals. We drew a random sample of 200 appeals from this final data file, a step sketched in the example following this paragraph. The sample of appeals represented 200 unique veterans. The team used a random sample to protect against selection bias. Before conducting the interviews, we tried to obtain contact information for veterans with missing phone numbers or addresses, but were not able to identify contact information in all cases. When addresses were available, we sent notification letters to veterans before beginning the interviews. We conducted the interviews with veterans from December 2010 to April 2011. When contacting the veterans, the interviewer read the phone script and interview questions and documented the responses.
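A minimal sketch of the sample draw described above follows, assuming the appeals extract is available as a flat file with a regional office column. The file name, column name, and seed are hypothetical, and GAO does not report the software actually used.

```python
# Minimal sketch of the sample selection described above; file name, column
# names, and the random seed are hypothetical.
import pandas as pd

appeals = pd.read_csv("vacols_appeals_feb_jul_2010.csv")  # 77,542 appeals

# Exclude appeals from the four Expedited Claim Adjudication pilot offices,
# where DRO review was automatic and the election questions would not apply.
pilot_offices = ["Nashville", "Seattle", "Lincoln", "Philadelphia"]
eligible = appeals[~appeals["regional_office"].isin(pilot_offices)]  # 71,891

# A fixed seed makes the random draw reproducible; random selection
# protects against selection bias, as noted above.
sample = eligible.sample(n=200, random_state=42)
print(len(sample))  # 200 appeals, representing 200 unique veterans
```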
In cases where a spouse or other family member stated that the veteran was not able to participate in the interview, we asked to speak with the person most knowledgeable about the appeal and conducted the interview with this person. We also spoke with surviving spouses or children who were appealing a disability claim of a deceased veteran. We successfully completed interviews with 40 veterans—28 of the veterans had chosen a DRO review of their appeal and 12 had chosen a traditional review. The results of our interviews cannot be generalized to the overall population of veterans who filed appeals between February 2010 and July 2010. (See table 4 for detailed information on the implementation of the phone interviews.)

In addition to the contact named above, Shelia Drake, Assistant Director; Lorin Obler; Susan Aschoff; Susan Baker; Jamie Berryhill; Jessica Botsford; Juliann Gorse; Mimi Nguyen; Anna Maria Ortiz; Patricia Owens; Carol Petersen; Carl Ramirez; and Walter Vance made key contributions to this report.

Military and Veterans Disability System: Worldwide Deployment of Integrated System Warrants Careful Monitoring. GAO-11-633T. Washington, D.C.: May 4, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Military and Veterans Disability System: Pilot Has Achieved Some Goals, but Further Planning and Monitoring Needed. GAO-11-69. Washington, D.C.: December 6, 2010.
Veterans’ Disability Benefits: Expanded Oversight Would Improve Training for Experienced Claims Processors. GAO-10-445. Washington, D.C.: April 30, 2010.
Veterans’ Disability Benefits: VA Has Improved Its Programs for Measuring Accuracy and Consistency but Challenges Remain. GAO-10-530T. Washington, D.C.: March 24, 2010.
Veterans’ Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010.
Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008.
Veterans’ Benefits: Increased Focus on Evaluation and Accountability Would Enhance Training and Performance Management for Claims Processors. GAO-08-561. Washington, D.C.: May 27, 2008.
Veterans’ Disability Benefits: VA Can Improve Its Procedures for Obtaining Military Service Records. GAO-07-98. Washington, D.C.: December 12, 2006.
Veterans’ Benefits: Further Changes in VBA’s Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005.
Veterans’ Disability Benefits: Claims Processing Challenges and Opportunities for Improvements. GAO-06-283T. Washington, D.C.: December 7, 2005.
Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005.
VA Disability Benefits: Board of Veterans’ Appeals Has Made Improvements in Quality Assurance, but Challenges Remain for VA in Assuring Consistency. GAO-05-655T. Washington, D.C.: May 5, 2005.
Veterans’ Benefits: More Transparency Needed to Improve Oversight of VBA’s Compensation and Pension Staffing Levels. GAO-05-47. Washington, D.C.: November 15, 2004.
Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004.
Veterans’ Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003.
Veterans Benefits Administration: Clarity of Letters to Claimants Needs to Be Improved. GAO-02-395. Washington, D.C.: April 23, 2002.
Human Capital: Practices That Empowered and Involved Employees. GAO-01-1070. Washington, D.C.: September 14, 2001.
The Department of Veterans Affairs (VA) has struggled to provide timely reviews for veterans who appeal decisions on their disability compensation claims. A veteran appeals to the VA regional office that made the initial decision and, if still dissatisfied, to the Board of Veterans’ Appeals (Board). An appeal to the Board adds more than 2 years, on average, to the wait for a decision on the appeal. To resolve more appeals at the regional level and avoid waits at the Board, VA established the Decision Review Officer (DRO) review in 2001 as an alternative to the traditional regional office appeal review. A DRO has the authority to grant additional benefits after reviewing an appeal based on a difference of opinion with the original decision. In contrast, under the traditional review, new evidence is generally required for a grant of additional benefits. GAO examined (1) the extent to which veterans choose a DRO review, (2) outcomes for DRO reviews, and (3) VA's challenges in managing DROs. GAO analyzed Board data, surveyed managers in all 57 regional offices, visited 4 offices, and interviewed veterans.

According to VA data, which have tracked DRO involvement only since 2003, veterans chose a DRO review in 61 percent (534,439) of all appeals filed from 2003 to 2010. Veterans who sought assistance with their appeal from a veterans service organization or other qualified representative were more likely to choose a DRO review than those without a representative. Without assistance, veterans may not fully understand their two appeal options. GAO found that the letter VA uses to inform veterans of their options does not highlight key deadlines or differences between the two options. According to more than half of surveyed regional office managers, most veterans could not make an informed choice between the options based on the letter alone.

The DRO review process has helped some veterans get additional benefits at the regional office level but has not reduced the percentage of appeals continuing on to the Board—the primary purpose of the program. In fiscal years 2003 through 2008, 21 percent of DRO reviews resulted in a full grant of benefits, compared with 17 percent of traditional reviews. A full grant of benefits ends, or resolves, the appeal at the regional level. However, appeals may also be resolved at the regional level if veterans who do not receive full grants decide not to continue their appeal to the Board. VA gave DROs the flexibility to interact informally with veterans in part so they could explain when the benefits already granted are appropriate given the law. However, while DRO reviews led to the grant of full benefits at a higher rate, a higher percentage of veterans not granted benefits through traditional review voluntarily ended their appeals. As a result, in fiscal years 2003 through 2008 the overall percentage of appeals resolved at the regional level was about the same for DRO and traditional reviews—about 70 percent for both.

VA faces challenges in how to most effectively use and train DROs. Since the DRO process and position were established, DRO duties have expanded beyond reviewing appeals to performing additional tasks such as quality review. However, VA officials have not reached consensus on how to balance DROs' time among different tasks. VA has no performance goal or measure for appeal resolution at the regional level that could help it determine whether it is achieving the most effective balance between different tasks.
In addition, VA headquarters offers no nationwide, standardized training for new DROs, which managers and DROs said would be beneficial because new DROs often lack experience with other tasks that DROs frequently perform, such as conducting hearings. Ninety-three percent of surveyed regional managers said a nationally standardized training for new DROs would be beneficial. GAO recommends that VA (1) revise its appeals election letter, (2) develop an appeal resolution goal at the regional level, and (3) develop a training curriculum on DRO duties. In its comments, VA concurred fully with GAO's first and third recommendations but only partially with the second. VA expressed concerns about an appeal resolution goal, including that it could encourage the unjustified granting of benefits. GAO believes that VA's quality control process minimizes this risk.
States are responsible for developing licensing criteria and health and safety requirements and conducting enforcement activities to ensure providers comply with them and thereby protect the safety and health of children in child care settings. While the federal government’s role is limited in this area, the states must certify that they have safety and health requirements in place and procedures to ensure providers comply with all applicable requirements to receive federal CCDF funds. The Department of Health and Human Services (HHS) oversees CCDF. In fiscal year 2003, the amount of federal funds appropriated for CCDF was $4.8 billion. The CCDF funds are provided to states through a block grant mechanism which allows states to set priorities, establish policies, and spend funds in ways that will help them achieve state child care goals. The Child Care and Development Block Grant statute that underlies the CCDF requires states to designate a lead agency and to establish state plans that commit to establishing and enforcing licensing requirements and health and safety standards for child care providers. The statute leaves it to the states to decide what requirements are to be applicable to specific categories of providers, but for health and safety it sets out three broad parameters that must apply to all providers receiving federal assistance: physical premises safety, staff training, and control of infectious diseases (including childhood immunizations). To qualify for a subsidy provided with funds from the CCDF, a provider must comply with these and any other applicable state requirements. To be eligible for CCDF subsidies, parents must be working, in training, or attending an educational program and have children less than 13 years of age. States can set income eligibility limits at or below 85 percent of the state median income level. States must use a significant portion of their CCDF funds for families receiving Temporary Assistance for Needy Families (TANF) or families who are at risk for becoming dependent upon public assistance. CCDF has five goals: (1) encouraging maximum flexibility in developing state child care programs; (2) promoting parental choice; (3) encouraging states to provide consumer education information to parents; (4) helping states provide child care to parents trying to become independent of public assistance; and (5) helping states implement health, safety, licensing, and registration standards established in state regulations. However, CCDF does not specify how the states should meet these goals and only requires them to be included as elements within the state child care plan. States can also determine which enforcement activities apply to different types of child care. Types of child care eligible to receive funding under TANF and the block grant are described in table 1. States regulate those providers over which they have authority by setting requirements, including those focusing on health and safety, that such child care providers must meet and by enforcing these requirements. State child care licensing offices are responsible for enforcing such health and safety requirements, and engage in different oversight activities for licensed providers and for regulated nonlicensed providers. Licensed providers are generally subject to standard oversight, which generally includes background checks, inspections, technical assistance and training, and the application of sanctions when providers are found to be out of compliance. 
Regulated nonlicensed providers receive less intense oversight. Such oversight can include self-certification that health and safety standards have been met, background checks, and sanctions. Providers who are legally exempt by state law from oversight are not regulated by state child care licensing offices. For example, in certain states, child care centers run by religious organizations may be exempt. In addition, under federal regulations, the states are not required to apply health and safety requirements to certain providers who only care for a child to whom they are related. Some states may conduct background checks and inspections of their legally exempt providers. States vary as to which types or subtypes of child care providers fall under each level of oversight. When establishing which providers will be required to obtain licenses, and hence be subject to standard oversight, and which will be required to comply with less intense requirements, states consider a number of factors. For example, states consider whether certain providers are subject to other health and safety regulating authorities, such as school-based programs that must conform to all health and safety requirements required for public schools and are overseen by the state education department. States also consider the staffing resources they would need to carry out the necessary licensing and enforcement activities in light of their budget, how to target resources toward the greatest number of children, and the impact that licensing activities could have on child care availability and parental choice. For example, some providers might pass the costs of conforming with licensing requirements on to parents, which could make such care too expensive for some parents. Currently, states are making choices about the extent to which they license certain providers and about which enforcement activities apply to different types of providers. There are no specified federal requirements for licensing and enforcement activities, so many states rely on recommendations made by professional organizations in developing their own standards. Professional organizations recommend standards for several critical licensing and enforcement activities, including background checks, monitoring visits, training for licensing staff, and caseload size. Table 2 outlines recommended practices for critical licensing and enforcement activities. In 2003, the median caseload size for inspectors decreased, but most licensing and enforcement activities for licensed providers remained about the same as those in 1999. Almost a quarter of the states maintained caseloads at recommended levels, and fewer states had caseload levels over 150. Most states conducted compliance inspections at least once a year, which met or exceeded the recommended level for all types of providers. However, most states do not require the amount of training recommended by national professional organizations. Most states continued monitoring exempt providers by distributing information about health and safety requirements and asking these providers to certify that they complied with state requirements, although some states engaged in more intense monitoring practices for these providers. In 2003, almost a quarter of the states maintained caseloads at or below the recommended level of 75 facilities to one inspector, and the number doing so had increased by 1 since 1999.
Specifically, in 1999, 11 states were meeting the recommended caseload levels, and that number increased to 11 states and the District of Columbia by 2003. The median number of facilities per inspector decreased during this time, from 118 facilities per inspector to 110, but the number of facilities per inspector was still well above the recommended level of 75 in 2003. In 1999, caseloads ranged from 52 in Missouri to 333 in Colorado, while in 2003, caseloads ranged from 35 per inspector in Hawaii to 600 per inspector in Iowa. Although the range was broader in 2003 than in 1999, more states decreased the size of their caseloads than increased them. Between 1999 and 2003, the caseload in 23 states and the District of Columbia decreased while the caseload in 16 states increased. The number of states with particularly high caseloads, over 150, dropped from 17 to 15 between 1999 and 2003. Among the states whose caseloads decreased were several states with some of the highest caseloads in 1999. For example, Colorado’s caseload went from 333 to 138 as its licensing budget nearly doubled. Figure 1 illustrates the ranges of caseload levels of licensing staff by state in 2003; see appendix II for a comparison of caseloads by state in 1999 and 2003 and appendix III for comparisons of licensing budgets and staffing levels by state in 1999 and 2003. While state data showed only 12 states maintaining caseloads at recommended levels, states reported that they were visiting facilities as often as recommended, and 15 states reported visiting more often. States may be managing to visit facilities as often as recommended while maintaining caseloads higher than those recommended in part because the caseload standard itself is not appropriate for some states. Experts note that this standard does not fully account for time-saving technology that might allow some states to operate effectively with larger than recommended caseloads. Further, licensing staff could be making trade-offs not captured in our survey. For example, staff could be maintaining their schedule for monitoring visits at the expense of performing other duties, including processing applications, providing technical assistance, or documenting inspection visits in a timely manner. According to the data reported through state surveys and NCCIC, one fewer state exempted some family child care providers (sole caregivers who care for children in a private residence other than the child’s home) from health and safety requirements in 2003 than in 1999. In 1999, 39 states exempted some family child care providers from regulation, and by 2003, 38 states were exempting some of these types of providers. In addition, fewer states required licensure for some types of centers in 2003 than had in 1999, including religious centers, federal centers, and recreation centers. See table 3. Many states set thresholds at which regulation begins according to the number of children served by different types of providers and exempt from regulation those providers falling below these thresholds. For example, states commonly determine which family child care providers will be regulated based on the number of children in care. Specifically, in a given state, providers caring for 7 or more children in their home might be regulated, while providers caring for 4 children in their home might be exempted from regulation. As figure 2 illustrates, these regulatory thresholds have changed very little since 1999.
However, 18 states met or exceeded the recommended practice of regulating providers who cared for 2 or more unrelated children in 2003, whereas 13 were doing so in 1999. Although some changes had occurred since 1999, states continued to follow the recommended practice of conducting compliance inspections at least once a year. However, the number of states that visited facilities at least twice a year dropped by about half for all types of settings. According to the data reported to us through our survey, 41 states inspected centers at least annually, and 32 states did so for group homes. Similarly, 28 states conducted inspection visits to family child care homes at least once a year. See figure 3 and appendix IV. The number of states requiring providers to pursue ongoing training increased from 26 in 1999 to 29 in 2003, and as in 1999, the same number of states (4) specifically followed the recommended practice by requiring at least 24 hours of training per year for center directors, and 3 states required this of center teachers. No states required this level of training for family child care providers. The same number of states (32) required licensing staff to have an academic degree in a related field in 1999 and 2003. Similarly, in 1999 and 2003, the same number of states (33) required licensing staff to have work experience in licensing or a related field before being assigned to licensing and enforcement activities. Finally, although very few states had required licensing staff to pass a test in 1999 (7), even fewer required this in 2004 (3). More states informed exempt, subsidized providers about the requirements under the federal grant by sending them a package of information about health and safety requirements in 2003 (32) than reported doing so in 1999 (28). In addition, 17 states reported informing providers by requiring them to attend a short briefing or orientation session. Of these, 13 states both sent a package of information and required attendance at a briefing or orientation. In monitoring providers exempted from regulation who were receiving CCDF funds, states exercised the wide discretion given them by the grant in determining how, or whether, to enforce state and local safety and health requirements for such providers. Some states monitored such providers relatively intensely, while others did not have the authority to do so under state law. For example, some states conducted background checks on exempt providers or inspected such providers. The number of states conducting background checks and inspections for such providers has increased since 1999. For 2003, 37 states reported conducting background checks on exempt providers, compared with 20 in 1999. Similarly, for 2003, 12 states reported conducting inspections for such providers, compared with 6 in 1999. Finally, for 2003, 9 states reported conducting both background checks and inspections, while 4 states reported doing so in 1999. While 43 states assigned staff to a geographic location, those states using this broad method varied in the specifics of such assignments. Nineteen states also reported assigning staff based on specific job tasks, such as responding to complaints. Nineteen states reported assigning staff to inspect a particular type of child care facility, such as centers or family homes.
Some states used multiple criteria to organize their staff; these states frequently first assigned staff based on geographic location and then relied on secondary criteria, such as type of child care facility. See figure 4. Forty-five states reported using technology to assist them with many aspects of licensing and enforcement activities. States used technology for tracking and monitoring subsidy use, inspecting facilities, making and reconciling payments to providers, linking state and local agencies, maintaining statistics on providers, maintaining statistics on families, and preparing reports to meet federal and state mandates. See figure 5. Technology to complete some of these tasks was available in only a limited number of locations, such as the state licensing office, while for other tasks the technology could be accessed anywhere in the state by authorized users. More states had statewide capability to maintain statistics on providers and families (24 states) and to prepare reports to meet federal and state mandates (20 states) than for any other tasks (tracking and monitoring subsidy use, 18 states; making and reconciling payments, 17 states; facility inspection checklists, 16 states; and linking state and local agencies, 12 states). States we visited have adopted a number of promising practices to assist in their child care licensing and enforcement activities. First, some states used technology in ways that allowed them to streamline their licensing and enforcement processes and to manage parent and provider information particularly effectively. Second, the states we visited had paired frequent inspections with technical assistance to help ensure that child care providers would meet state health and safety regulations. Third, three states we visited had implemented rating systems that differentiate providers by the quality of care they provide. Such systems have helped parents choose child care providers by allowing them to compare the quality of different providers. In addition, these systems have helped states determine what training and technical assistance each provider needs and have offered providers incentives to improve and maintain the quality of their care. Fourth, all the states we visited encouraged partnerships with community organizations to improve the training and education of providers. For example, states partnered with community colleges to connect providers with educational opportunities and with resource and referral agencies to meet the ongoing training needs of providers. States we visited used technology to make licensing and enforcement information more readily available to providers and parents, save time and resources, and track payments and subsidy use. For example, Florida—the state with the most complete, integrated, and up-to-date technology system of the states we visited—used technology to create an online public information system for providers and parents. Providers can use this system to access online information about state policies and regulations, training requirements, how to start a center, and other information. This has helped providers increase professionalism and self-monitoring, according to state officials. Parents can use the system to access the compliance histories of each provider. In addition, some states have used technology to make critical licensing and enforcement activities more efficient.
For example, Florida’s Internet-based system allows providers to register for training online and access their transcripts and final examination results. Florida officials also told us that posting provider information online reduced the amount of time child care licensing staff had to spend answering questions from providers and consequently allowed them to spend more time on other enforcement activities. Two states we visited—Florida and North Carolina—have used laptops to make the enforcement process more efficient. Specifically, in these states inspectors were able to enter data into their laptops while conducting inspections rather than having to enter this information into the computer after returning to the office. In addition, inspectors in Florida were able to print a copy of the compliance report for providers while on-site to facilitate discussions of areas needing improvement. The laptops also saved inspectors time by allowing them to quickly access provider information, state child care regulations, noncompliance citations, and inspection forms. Officials in both Florida and North Carolina said that using laptop computers had helped inspectors spend less time on administrative duties and allowed them to spend more time providing on-site technical assistance to providers. The project coordinator in Florida said that this system, which only covers child care licensing, was more affordable than other states’ licensing systems, which are part of their larger statewide child welfare information systems. In another state, technology was used to make it easier for providers and parents to manage subsidy payments and to reduce the likelihood of fraud or overpayment. Specifically, Oklahoma reported using an electronic benefits transfer (EBT) system to automatically calculate payment rates and to track subsidy use. Because Oklahoma reimburses providers at over 160 different rates depending upon the characteristics of the family and provider receiving a subsidy, the likelihood of error had been significantly higher before the system was implemented, when licensing staff were responsible for identifying the appropriate rate. According to state officials, this system has saved time processing paperwork. Further, the EBT system has allowed parents receiving a child care subsidy to document the time and attendance for their children using an EBT card issued by the state, and providers are automatically reimbursed at the appropriate rate. This automated system has also helped officials identify instances of child care fraud and payment abuse, according to state officials. We found that in the states we visited, only those that had recently invested in computer systems, such as Florida, had been able to take advantage of new technologies at an affordable price. In contrast, other states that had not recently invested in computer systems, such as Delaware, had been unable to adopt such promising practices. Specifically, although Delaware had been on the leading edge of using technology almost a decade ago, it had been unable to maintain this advantage. One state official said that it is costly to implement and maintain the technology necessary to support facilities inspection, particularly when the state’s system is relatively old and needs to be updated. For example, Delaware’s system was designed in 1995 to help staff investigate child abuse, but was being used in 2003 to track the licensing process.
According to state officials, the system was difficult to use for management purposes. To upgrade the system to adopt promising practices such as entering data while conducting on-site inspections, the state would need to develop the system and obtain additional equipment, such as laptop computers. However, officials told us that despite the increases in the state licensing budget since 1999, the state does not have the funding to upgrade the system and purchase additional technological equipment because these funds had been designated for specific purposes, such as staffing and infant and toddler programs. Florida, North Carolina, and Oklahoma have implemented rating systems to tie the level of reimbursement to a provider’s quality. These rating systems helped create a market system by giving parents more information about the quality of child care providers, and they offered incentives to providers to obtain higher levels of education and improve quality in their facilities. In implementing rating systems, states set standards for different tiers of quality and assessed providers against these standards. The lowest tier included those providers who only met the state licensing criteria, while the highest tiers in Florida and Oklahoma included those providers who had achieved accreditation by national organizations. States differ in the number of tiers in their systems: North Carolina has five, Oklahoma has four, and Florida has two. Rating systems have helped parents choose child care providers by allowing them to compare the quality of different providers. While some parents might not have known whether national accreditation would mean higher-quality care, they could easily see that a provider with five stars was considered to provide better-quality care than a provider with one star. Rating systems also offered providers incentives to improve and maintain the quality of their facilities by offering higher levels of reimbursement to higher-quality providers. For example, through the star rating systems in North Carolina and Oklahoma, providers with more stars received a higher rate of reimbursement than providers with one star. Similarly, in Florida high-quality providers also receive financial incentives, such as a reimbursement rate for subsidized children that is 20 percent higher than the market rate, a tax break for high-quality providers whose clientele does not include subsidized families, and an exemption from sales tax on educational materials. Officials in Oklahoma said that the rating system has provided incentives to all providers to improve the quality of their care, even providers who do not serve subsidized children. Such providers use the star rating as a marketing tool for their facilities, according to state officials. Finally, the rating systems—and the standards and indicators of quality on which they are based—can help states focus their child care quality efforts. For example, North Carolina’s five-star rating system forms the core of all its child care quality efforts, according to state officials. The Stars program provided the criteria for identifying areas in which individual centers needed to improve and the steps centers could take to obtain a higher rating, the basis for developing and providing full-day professional development and workshops throughout the year, and links with financial supports to encourage and reward participation in professional development that could lead to higher-quality child care.
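To illustrate the arithmetic behind tiered reimbursement (and the automated rate lookup that systems such as Oklahoma's EBT system perform), the sketch below maps a provider's star rating to a reimbursement rate. The multipliers and base rate are invented for illustration; the actual rate schedules, including Oklahoma's more than 160 rates, are not published in this report.

```python
# Illustrative sketch of star-tiered reimbursement; the multipliers and
# base rate are invented and do not reflect any state's actual schedule.
STAR_MULTIPLIER = {1: 1.00, 2: 1.05, 3: 1.10, 4: 1.15, 5: 1.20}

def subsidized_rate(base_rate: float, stars: int) -> float:
    """Higher-rated providers are reimbursed at a higher rate; looking the
    rate up in a table, rather than having staff pick it, reduces errors."""
    return round(base_rate * STAR_MULTIPLIER[stars], 2)

# Florida's variant described above: high-quality providers receive a rate
# 20 percent above the market rate for subsidized children.
market_rate = 100.00
print(subsidized_rate(market_rate, 5))  # 120.0
```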
Training is an integral part of ensuring and upgrading the quality of early childhood education in all the states we visited. We found promising practices in training and information programs for providers and parents. Training and information for providers include information on starting up a facility and program standards that may be available in information packages or on the Internet, orientation training, technical assistance, staff development courses, conferences, and model observation training sites. Parent information services include information on what to look for when evaluating child care providers, as well as related information on vaccinations, screening, and playground safety and on where to access parent support services. Delaware, North Carolina, and Oklahoma were cited by experts as having notable training programs. What they have in common is that each training program is part of a total statewide system of interconnected partnerships between state and community agencies to ensure quality early childhood care. In these three states, licensing officials offered prelicensing training so that potential providers could get information on state requirements and avoid being out of compliance. In Delaware, staff from the Family & Workplace Connection (the umbrella organization that provides both training and resource and referral (R&R) services and offers a wide range of training opportunities, including some technical assistance) also attended new provider orientations given by the licensing office to meet these providers and inform them of other services the organization supplies, such as the food program, which can furnish free food to providers, and grant opportunities. According to one official, these supports helped providers understand the health and safety requirements, which in turn helped facilitate compliance. In Delaware, inspectors also walked new providers through the health and safety regulations and addressed providers’ questions or concerns on their first walk-through of the facility. The bulk of training for providers was provided by training organizations—community colleges, early education training centers, some R&R agencies, and universities—working in partnership with the licensing offices for ongoing staff development. Such training is designed to help providers meet ongoing training requirements and provide them with technical assistance so they can improve the quality of their care by engaging in training programs aligned with state standards and requirements. Oklahoma, North Carolina, and Delaware pursued different strategies to implement this alignment, and Florida demonstrated how a comprehensive Internet information system saves time for providers, training organizations, and state officials. Oklahoma’s three-tiered training program comprehensively addresses all training needs, from meeting minimal requirements for child care licensing to the most advanced professional development for early care and education workers and center directors. Tier I is short-term, job-related training that can be counted toward ongoing training requirements. Providers can pursue Tier I training by attending workshops or conferences related to their job, watching videos, engaging in self-directed reading, and participating in in-service training. Tier II is in-depth training that providers engage in by completing courses approved by the state’s Center for Early Childhood Professional Development.
Tier III training is advanced training: formal education through credit-bearing courses at accredited colleges, universities, and technology centers whose courses transfer for credit to such schools, leading to the highest levels of the quality enhancement star system, the career development ladder, and the Oklahoma director’s credential. North Carolina has woven its provider training into its comprehensive Smart Start program, which is a recognized national model of partnerships working together to meet the needs of young children. As part of this program, the education level of staff is one of three areas on which providers are evaluated when their quality rating is determined. Providers who meet higher standards for education and experience receive more stars because, according to state officials, child care teachers with more early childhood education and experience are prepared to provide children with a more enriching child care experience. Providers can improve their star rating by increasing staff education and experience levels and by employing more teachers with early childhood education credentials and experience. Providers can pursue such credentials by completing certificate or higher education programs offered at community colleges and universities. Such training activities are facilitated and coordinated by the North Carolina Institute for Early Childhood Professional Development. Delaware First, the career development program for early childhood professionals that is operated by the Family & Workplace Connection, has developed Core Curricula—Basic, Advanced, and Specialty—that can be used to fulfill training requirements and also to pursue a Child Development Associate degree or college credit. Florida and Delaware provided training information and performed administrative functions through the Internet. This has saved time for providers, inspectors, and training organizations by allowing providers to register online for courses offered at community colleges or other organizations offering child care training. In addition, providers and inspectors have been able to access providers’ transcripts online. State officials say that this program has saved the state time and money, although no studies have been done to quantify these savings. Other state officials have noted that using an Internet system has promoted quality in child care by making the process transparent and easily accessible. In Florida, for example, families seeking quality child care can see the latest inspection reports on a facility because they are immediately posted on the Internet. According to child care officials and the National Association for the Education of Young Children (NAEYC), training opportunities are maximized and retention rates for child care workers increase when rewards for training and higher-quality child care are linked. For example, both North Carolina and Oklahoma offer incentives to providers to encourage them to pursue training. These incentives are offered through a program for college scholarships funded by state, federal, and private dollars (T.E.A.C.H. Early Childhood). Both North Carolina and Oklahoma have also adopted a salary supplement program called WAGE$ to reward increased educational attainment. Parent training is provided by R&R organizations in each state, which provide information on the Internet and by telephone, as well as through written brochures, fact sheets, and newsletters.
For example, Florida’s R&R provides parents a database listing all legally operating child care and early childhood providers, available options for care, indicators of quality to look for when considering a provider, and what to consider when checking on a child’s placement, that is, determining how well the placement will meet a child’s needs. In Delaware, the R&R offered workshops throughout the year to parents on over 105 topics pertaining to children up to school age. In all four states we visited, the R&R agency provided a system to help parents locate child care—often known as a child care locator—as well as information about how to choose quality care and what training opportunities were available for parents. One expert we interviewed told us that it is important to educate parents because they are the ones who visit the child care facility every day and are most effective in helping to ensure provider compliance with state child care standards. By expanding who is covered by licensing requirements and by increasing parental knowledge, states have increased oversight of child care providers since 1999. However, some states have decreased the number of inspections per facility per year. At the same time, states are maintaining their flexibility in administering CCDF and providing parents with choice in the child care options available to them. States use inspections to oversee the child care provided in facilities with the greatest number of children and use less intensive methods, like self-certification, for other facilities they regulate. Self-certification affords less assurance that providers are meeting safety and health requirements, but in an era when caseload sizes already exceed recommended levels in many states and states find themselves challenged by increasing child care demand, it helps keep low-cost child care available. The use of technology has also increased in the states for state licensing staff, providers, and parents. Not only has the number of states using technology in the areas associated with licensing and enforcement increased, but technology has also expanded into new areas, providing information and training resources for parents, providers, and training organizations on a broad range of topics for a number of different functions. The obvious advantages of an up-to-date, Internet-based, integrated information system have been demonstrated by Florida. However, the size of the investment required for this type of system depends on whether the state chooses to create an independent licensing information system or wants its licensing system to be part of a larger statewide child care information system. Florida found it more affordable to do the former. We provided officials of the Administration for Children and Families (ACF), HHS, an opportunity to comment on a draft of this report. ACF was pleased with the findings and noted that the report will be especially helpful to ACF and the states because it compares state child care health and safety enforcement in 1999 and 2003. ACF’s comments are reproduced in appendix VI. ACF also supplied technical comments that were incorporated as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time we will send copies of this report to the Honorable Wade F.
Horn, Assistant Secretary, Administration for Children and Families, HHS; appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-7215. To perform our work, we conducted a mail survey in 2004 of state licensing officials in all 50 states and the District of Columbia about their licensing and enforcement policies and practices in 2003. Specifically, we asked states to report on the frequency of their compliance inspections, background checks, training programs and educational requirements for licensing staff, and caseload sizes. We achieved a response rate of 98 percent, with 49 states and the District of Columbia responding to our survey. Maine did not respond. In our 1999 survey, we achieved a 100 percent response rate. We compared our survey results with a mail survey we conducted in 1999 to identify any changes that had occurred within the past few years. Of the 30 substantive questions on our 2004 survey, 17 were identical to those in the 1999 survey, 1 was a version of another 1999 question, and 12 were new questions. To ensure their reliability, we pretested all new questions with current or former state licensing officials. Rather than using our survey data for the number of states exempting family child care providers from regulation, we used data from the National Child Care Information Center (NCCIC). The responses to question 16 of our survey (on the number of states exempting some family child care homes from state regulation) proved problematic, as they had in the 1999 survey. Callbacks to a sample of respondents whose survey responses differed from data produced by NCCIC showed that respondents misunderstood our question. The NCCIC information was based on published state standards of the regulation threshold point for the number of children served rather than survey data. Therefore we used the NCCIC data in this report to answer this question. Since this was similar to what we had done to compensate for the problems in responses to that question in the 1999 survey, the data for the 2 years are comparable. In addition, we conducted a literature search and interviewed child care licensing experts and state and federal officials to gather information about critical licensing and enforcement activities occurring within the states. The experts we interviewed included university research professors, representatives from public policy organizations, and staff from organizations that deal with the policy and practice of providing child care services. With the information we gathered through these expert interviews, we identified and conducted site visits in four states—Delaware, Florida, North Carolina, and Oklahoma—to provide examples of promising practices in licensing and enforcement. We conducted our work between October 2003 and July 2004 in accordance with generally accepted government auditing standards. Most state licensing offices reported increases in budgets and the number of full-time equivalent (FTE) staff since 1999, although an increase in one did not necessarily reflect an increase in the other. Some states could not provide us with FTE or budget information for particular years.
[Appendix table, not reproduced: state-by-state data on whether the state conducts renewal visits and how often facilities are inspected, with responses ranging from at least twice a year to less often than once every 2 years or not inspected on a regular basis; Maine did not respond to the survey.]
Family day care is provided by an individual provider in a private residence other than the child’s home. Group homes provide care by two or more providers in a private residence other than the child’s home. Centers are nonresidential facilities that provide care for children and include full- and part-time group programs, such as nursery and preschool programs. In addition to the person named above, Margaret Armen, Amy Buck, Patricia Bundy, Kara Kramer, Jerry Sandau, and Jay Smale also made major contributions to this report.
Child Care: Recent State Policy Changes Affecting the Availability of Assistance for Low-Income Families. GAO-03-588, May 5, 2003.
Child Care: States Exercise Flexibility in Setting Reimbursement Rates and Providing Access for Low-Income Children. GAO-02-894, September 18, 2002.
Child Care: States Have Undertaken a Variety of Quality Improvement Initiatives, but More Evaluations of Effectiveness Are Needed. GAO-02-897, September 6, 2002.
Child Care: States Increased Spending on Low-Income Families. GAO-01-293, February 2, 2001.
Child Care: State Requirements for Background Checks. GAO/HEHS-00-66R, February 28, 2000.
Child Care: State Efforts to Enforce Safety and Health Requirements. GAO/HEHS-00-28, January 24, 2000.
Child Care: How Do Military and Civilian Center Costs Compare? GAO/HEHS-00-7, October 14, 1999.
Welfare Reform: States’ Efforts to Expand Child Care Programs. GAO/HEHS-98-27, January 13, 1998.
The federal government requires states that receive funds from the Child Care and Development Fund to establish basic health and safety requirements. The federal government also requires states receiving federal funds for child care to have procedures in place to ensure that providers being paid with grant dollars comply with the applicable safety and health requirements. Because of the significant federal role in paying for child care services and congressional concerns about the way in which states ensure the safety and health of children in child care settings, we were asked to follow up on our prior report, Child Care: State Efforts to Enforce Safety and Health Requirements (GAO/HEHS-00-28, Jan. 24, 2000). This report (1) identifies changes in states' licensing and enforcement activities for various types of licensed and nonlicensed providers since 1999, (2) describes the ways child care licensing agencies organize inspection staff and use technology, and (3) provides examples of promising practices in state child care licensing and enforcement activities. To obtain data, we surveyed state licensing officials in 2004 about their 2003 activities, interviewed experts, and made site visits to four states: Delaware, Florida, North Carolina, and Oklahoma. State efforts in the licensing and oversight of child care facilities are generally about the same as they were in 1999, except that the median caseload dropped from 118 to 110 facilities per inspector in 2003. We found that in 2003, 38 states exempted some family child care providers from being regulated, compared with 39 states in 1999. Most states conducted compliance inspections at least once a year, meeting or exceeding the recommended level for all types of providers. Many states organized inspection staff by geography and used technology in many parts of the inspection process. Forty-three states reported assigning staff to geographic locations throughout the state. States also assigned staff based on the specific job task and the type of child care facility the staff would inspect. Many states used multiple criteria to assign staff. Forty-five states reported using technology to assist them with many aspects of licensing and enforcement functions, such as maintaining statistics on families and providers. States have adopted a number of promising practices to assist their child care licensing and enforcement activities. These practices include the use of technology to streamline licensing and enforcement processes and manage parent and provider information, rating systems to aid parents in selecting the appropriate child care for their child and to offer providers incentives to improve and maintain the quality of their care, and working with other organizations to train providers and parents.
To advance CMS’s efforts to prevent fraud, waste, and abuse in the Medicare fee-for-service program, the Small Business Jobs Act of 2010 appropriated $100 million for CMS to implement a data analytic system. The law required CMS to implement a system that could analyze claims prior to payment to identify suspect claims and provider billing patterns and prevent payment of improper and potentially fraudulent claims, among other things. In April 2011, CMS awarded almost $77 million to a contractor to implement, operate, and maintain FPS and design analytic models for the system. CMS awarded about $13 million to a second contractor in July 2011 to develop additional analytic models for FPS. As the original FPS contract was set to end, CMS awarded a nearly $92 million contract in April 2016 for a new, upgraded FPS system—FPS 2.0. FPS 2.0 was fully implemented in March 2017.

CMS’s Center for Program Integrity (CPI)—which oversees the agency’s Medicare program integrity efforts—employs FPS as a key component of its strategy to move beyond the “pay and chase” approach of recovering improper and potentially fraudulent payments to focusing on prevention. FPS screens fee-for-service claims prior to payment in order to help identify and prevent improper and potentially fraudulent payments by performing two primary functions:

Develop leads for fraud investigations. FPS compares provider billing patterns and other data against models of potentially fraudulent behavior to identify providers with suspect billing patterns. For example, an FPS model identifies providers that bill for a disproportionate number of services in a single day relative to other providers. FPS simultaneously risk-scores providers identified by the models to prioritize them for potential investigation. In developing these leads, FPS is intended to help CMS prevent potentially fraudulent payments by furthering the agency’s ability to more quickly identify and investigate suspect providers and take more timely corrective actions.

Execute automated prepayment edits. FPS edits deny certain improper payments, and some edits compare information from multiple claims to do so. For example, FPS may deny physician outpatient claims based on information from an inpatient claim associated with the same episode of care.

CMS submitted three annual reports on FPS’s implementation to Congress in response to requirements established by the Small Business Jobs Act of 2010. In these reports, CMS provided information on the corrective actions taken and savings achieved from FPS. In its most recent report, CMS reported that FPS had cumulatively helped prevent or identify nearly $1.5 billion in improper and potentially fraudulent payments from its implementation through the end of calendar year 2015.

CMS uses contractors to support the agency’s program integrity activities, including program integrity contractors to identify and investigate providers engaged in potential Medicare fee-for-service fraud. CMS is currently in the process of transitioning Medicare program integrity contracts from ZPICs to new contract entities, UPICs. ZPICs operated in seven geographical jurisdictions across the country. UPICs will operate in five jurisdictions and combine Medicare and Medicaid program integrity efforts under a single contracting entity (fig. 1 depicts the geographic jurisdictions of ZPIC and UPIC zones). As of May 2017, two of the five UPICs—the Midwestern and Northeastern—were operational.
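To make the lead-development function more concrete, the following Python sketch flags providers whose peak single-day service volume is an outlier relative to peers and ranks them by a simple risk score for triage. The scoring logic, threshold, and field names are assumptions for illustration only; the actual FPS models are more sophisticated and are not public.

from collections import defaultdict

def score_providers(claims, threshold=3.0):
    # claims: iterable of dicts with 'provider_id', 'date', 'service_count'.
    # Returns suspect providers ranked by a crude peer-relative risk score.
    daily = defaultdict(int)
    for c in claims:
        daily[(c["provider_id"], c["date"])] += c["service_count"]

    # Peak single-day service volume per provider
    peak = defaultdict(int)
    for (provider, _day), count in daily.items():
        peak[provider] = max(peak[provider], count)

    # Score each provider's peak against the peer distribution (z-style score)
    mean = sum(peak.values()) / len(peak)
    sd = (sum((v - mean) ** 2 for v in peak.values()) / len(peak)) ** 0.5 or 1.0
    scores = {p: (v - mean) / sd for p, v in peak.items()}

    # Keep only providers above the risk threshold, highest score first
    return sorted(
        ((p, s) for p, s in scores.items() if s >= threshold),
        key=lambda item: item[1],
        reverse=True,
    )

In this toy version, a provider billing vastly more services in a single day than its peers rises to the top of the list, loosely mirroring how the FPS model described above prioritizes leads for the program integrity contractors.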
The program integrity contractors identify leads for provider investigations from three categories of sources:

Referrals. A number of entities, including CMS, law enforcement agencies, and the MACs, refer leads about suspect providers to the program integrity contractors. The program integrity contractors also receive leads based on beneficiary and provider complaints and allegations.

Program integrity contractor data analysis. Program integrity contractors use postpayment claims data to conduct their own analyses to identify providers with suspect billing patterns.

FPS. FPS identifies providers with suspect billing patterns and prioritizes leads based on provider risk-scores.

The program integrity contractors generally have a triage process to review leads and determine whether the leads are indicative of potential fraud (see fig. 2 for information on program integrity contractor investigation processes). Leads that are determined to be suspect become formal investigations, and the program integrity contractors perform a range of investigative activities to gather evidence and determine if providers are engaged in potential fraud. These activities include conducting beneficiary and provider interviews, site visits of provider facilities, and manual reviews of provider claims.

Based on their investigations, the program integrity contractors may take corrective actions by referring providers engaged in potential fraud to law enforcement and initiating administrative actions. Specifically, if the program integrity contractors uncover evidence of potential fraud, they refer the investigation to the Department of Health and Human Services Office of Inspector General (HHS OIG) for further examination, which may lead to possible criminal or civil prosecution by the Department of Justice. The program integrity contractors may also recommend a range of administrative actions to CMS for approval and implementation. Such actions include revocation of providers’ billing privileges and payment suspensions (table 1 describes the administrative actions the program integrity contractors may recommend against providers).

CMS’s claims processing systems apply prepayment edits to all Medicare fee-for-service claims in an effort to pay claims properly. Most of the prepayment edits are automated, meaning that if a claim does not meet the criteria of the edit, it is automatically denied. Other prepayment edits flag claims for manual review, in which trained clinicians and coders examine claims and associated medical records to ensure that the claims meet Medicare rules and requirements. Many improper and potentially fraudulent claims can be identified only by manually reviewing associated medical records and beneficiary claim histories, and exercising clinical judgment to determine whether services were reasonable and necessary. Whereas automated edits are applied to all claims, manual edits are applied to very few—less than 1 percent of claims undergo manual review.

CMS contracts with the MACs to process and pay Medicare fee-for-service claims and implement prepayment edits in the Medicare claims processing systems. The claims processing systems consist of three systems—the MAC front-end systems, shared systems, and Common Working File—that carry out a variety of functions and execute prepayment edits (see fig. 3). When implementing FPS, CMS integrated it with the claims processing systems so that claims are screened by FPS prior to payment. Unlike the claims processing systems, which the MACs operate, FPS is maintained by CPI.
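To illustrate the split between automated denial and manual-review flagging described above, here is a minimal sketch of a prepayment-edit pipeline; the rule functions, claim fields, and dollar threshold are invented and do not represent actual Medicare edits.

VALID_CODES = {"99213", "99214"}  # hypothetical set of payable procedure codes

def apply_prepayment_edits(claim, auto_edits, review_edits):
    # auto_edits: rules that automatically deny a claim when violated.
    # review_edits: rules that route a claim to trained clinicians/coders.
    for rule in auto_edits:
        if rule(claim):
            return "DENY"           # automated denial, no human involved
    for rule in review_edits:
        if rule(claim):
            return "MANUAL_REVIEW"  # medical records examined by hand
    return "PAY"

invalid_code = lambda c: c["procedure_code"] not in VALID_CODES
high_dollar = lambda c: c["billed_amount"] > 100_000

decision = apply_prepayment_edits(
    {"procedure_code": "XXXX", "billed_amount": 500},
    auto_edits=[invalid_code],
    review_edits=[high_dollar],
)
# decision == "DENY": the claim fails an automated edit outright

Routing only rare, high-risk claims to manual review is consistent with the report's observation that less than 1 percent of claims are reviewed by hand.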
HFPP is a voluntary public-private partnership established by HHS and the Department of Justice to facilitate collaboration in addressing health care fraud. The membership includes Medicare- and Medicaid-related federal agencies and several state agencies; other federal agencies with responsibility for federal health care programs, such as the Department of Defense and Department of Veterans Affairs; law enforcement agencies; private payers; and antifraud and other health care organizations. HFPP was established, in part, to help payers identify schemes and providers engaged in potential fraud that individual payers may not be able to identify alone. HFPP began in 2012 with 20 members and, as of June 2017, had grown to 79 members. As of the end of calendar year 2016, CMS had cumulatively spent $30.3 million on HFPP.

ZPIC officials stated that FPS helps them identify suspect providers quickly. Because FPS analyzes claims prior to payment, providers with suspect billing patterns can be identified quickly relative to other sources of leads. In particular, several ZPIC officials stated that the leads they develop from their data analyses of postpayment claims are not as timely. Officials from two ZPICs estimated that the postpayment claims they use for their analyses may have been for services rendered 1 to 2 months prior, while the claims analyzed by FPS may have been for services recently rendered. ZPIC officials also said the information associated with FPS leads allows them to examine and triage those leads quickly to determine whether to initiate investigations. FPS leads provide specific information about the type of potential fraud identified, along with claims data and other supporting information. ZPIC officials further stated that they use information from FPS when triaging leads from other sources. In contrast to FPS leads, several ZPIC officials noted that reviewing and triaging leads based on referrals often necessitates additional time and resources. In particular, allegations associated with some referrals can be vague, which makes it difficult for ZPICs to identify the relevant provider claims data and other information needed to assess the validity of the allegations.

However, once an investigation is initiated, officials stated that FPS has generally not sped up the process for investigating providers. Several ZPIC officials noted that investigations based on FPS leads are similar to those from other sources in that they require further investigation, such as manual claim reviews or site visits of provider facilities, to substantiate the leads and gather evidence of potential fraud. While ZPIC officials said that FPS does not speed up investigations, officials from several ZPICs noted that FPS can help improve the quality of beneficiary interviews. Since FPS leads are based on prepayment claims data, ZPICs can conduct beneficiary interviews shortly after the services have been rendered, when beneficiaries may be better able to recall details about their care.

CMS has not tracked data to assess FPS’s effect on the timeliness of investigation processes. CMS has lacked such timeliness data because of limitations with its IT system for managing and overseeing ZPICs. However, as of May 2017, CMS was in the process of implementing a new IT system that could be used to assess FPS’s effect on the timeliness of program integrity contractor investigation processes.
In transitioning to UPICs, CMS is implementing a new contractor workload management system that will capture data on the timeliness of UPIC investigation processes. For example, the system will be able to capture information on the amount of time it takes a UPIC to evaluate a lead or conduct an investigation. CMS officials said that the agency plans to use the information tracked by the system to monitor program performance, including assessing FPS’s effect on UPIC investigation processes and the timeliness of corrective actions. The officials also stated that they may not be able to conduct such an assessment for several years, as CMS is still in the process of transitioning to UPICs and implementing the new IT system. Further, the officials said that they subsequently would want to collect several years’ worth of such data to ensure a reliable assessment.

In fiscal years 2015 and 2016, about 20 percent of ZPIC investigations were initiated based on FPS leads, according to our analysis (see table 2). In both years, nearly half of ZPIC investigations were based on referrals. The proportion of investigations based on FPS leads is poised to increase as CMS changes program integrity contractor requirements for using FPS with the transition from ZPICs to UPICs. CMS has required the ZPICs to review all FPS leads that met high-risk thresholds. CMS is instead requiring that the UPICs derive 45 percent of new investigations from FPS. ZPIC officials stated that the new UPIC requirement should allow UPICs flexibility to focus their reviews on the FPS leads that are most applicable to their geographic region. For example, a UPIC with high levels of home health agency fraud within its jurisdiction can focus its reviews of FPS leads on those providers.

[Note to a table of corrective actions and savings associated with FPS, not reproduced here: for total actions, CMS tracked the number of investigations referred to law enforcement; for actions associated with FPS, CMS tracked the number of providers referred to law enforcement. These data are not directly comparable.]

In addition to tracking the corrective actions and savings associated with FPS, CMS also measures the extent to which investigations initiated from FPS leads result in actions against providers engaged in potential fraud. CMS reported that, in fiscal year 2015, 44 percent of FPS-initiated investigations resulted in administrative actions, which met the agency’s fiscal year goal of 42 percent of investigations leading to administrative actions. In fiscal year 2016, 38 percent of FPS-initiated investigations resulted in administrative actions, which did not meet the agency’s fiscal year goal of 45 percent.

FPS prepayment edits screen individual claims to automatically deny payments that violate Medicare rules or policies. For example, some FPS edits deny claims that exceed coverage utilization limits for a service. FPS edits do not analyze individual claims to automatically deny payments based on risk alone or the likelihood that they are fraudulent. According to CMS officials, the agency does not have the authority to use FPS to automatically deny individual claims based on risk without further evidence confirming that the claims are potentially fraudulent. Although the prepayment edits in FPS are functionally similar to those in CMS’s claims processing systems, the FPS edits specifically target payments associated with potential fraud schemes. Like edits executed elsewhere in the claims processing systems, FPS edits deny payments based on rules or policies.
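As a purely illustrative version of the utilization-limit edit mentioned above, the sketch below denies a claim when cumulative billed units would exceed a coverage limit; the service codes, limits, and claim fields are hypothetical and do not reflect CMS's actual edit logic.

# Hypothetical per-service utilization limits (units per benefit period)
UTILIZATION_LIMITS = {"SVC-A": 1, "SVC-B": 24}

def utilization_edit(claim, prior_units):
    # prior_units: units already paid for this beneficiary and service in
    # the benefit period (in practice, drawn from claims history).
    limit = UTILIZATION_LIMITS.get(claim["service_code"])
    if limit is None:
        return "PASS"  # no utilization limit applies to this service
    if prior_units + claim["units"] > limit:
        return "DENY"  # paying this claim would exceed the coverage limit
    return "PASS"

print(utilization_edit({"service_code": "SVC-A", "units": 1}, prior_units=1))
# -> "DENY"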
Unlike the edits in the claims processing systems, all of the edits in FPS are designed to address identified payment vulnerabilities associated with potential fraud, according to CMS officials. Payment vulnerabilities are service- or system-specific weaknesses that can lead to improper payments, including improper payments that may be due to fraud. For example, CMS implemented an FPS edit that denies physician claims that improperly increase payments by misidentifying the location where the service was rendered. The payments are denied based on the rule that physician claims must correctly identify the place of service. The edit helped address a payment vulnerability identified by HHS OIG that found millions of dollars in overpayments.

According to CMS officials, the advantage of using FPS to implement prepayment edits is that the system allows CMS to prioritize edits intended to address payment vulnerabilities associated with potential fraud. Because CPI maintains FPS, CMS can quickly implement edits in FPS. In contrast, edits that are implemented in the claims processing systems are queued as part of quarterly system updates and may need to compete with other claims processing system updates. CPI is not subject to such limitations when implementing edits in FPS, and officials said that edits can be developed and implemented in FPS more quickly compared to the claims processing systems. As of May 2017, CMS had implemented 24 edits in FPS. CMS reported that in fiscal year 2015, FPS edits denied nearly 169,000 claims and saved $11.3 million. In fiscal year 2016, the edits denied nearly 324,000 claims and saved $20.4 million.

CMS officials stated that the number of prepayment edits implemented in FPS thus far has been limited, but that the agency is taking steps to address certain challenges that would allow it to develop and implement edits more quickly. For example, because of their role and expertise in processing claims, the MACs advise CPI and help develop and test FPS edits before they are implemented in the system to ensure they will work as intended. However, CPI has been limited in the amount of MAC resources that it can engage to help develop FPS edits under existing contracts. According to officials, CPI is planning to take steps to more directly involve the MACs in FPS edit development, which officials said should accelerate the edit implementation process. Additionally, CMS officials said that they plan to use FPS functionality to implement new edits by expanding existing edits to apply to other services. All 24 current FPS edits were developed from the ground up, a time- and resource-consuming process, according to CMS officials. In contrast, developing new edits by expanding existing edits will allow CMS to more quickly develop and implement new edits.

HFPP participants we interviewed, including CMS officials, reported that sharing data and information within HFPP has been useful to their efforts to address health care fraud. The principal activity of HFPP is generating studies that pool and analyze multiple payers’ claims data to identify providers with patterns of suspect billing across multiple payers. Study topics examine known fraud vulnerabilities important to the participating payers and are selected through a collaborative process. As an example, one study used pooled data to identify providers who were cumulatively billing multiple payers for more services than could reasonably be rendered in a single day.
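The single-day billing study described above lends itself to a simple pooled-data computation. The sketch below sums claimed service hours per provider per day across every participating payer and flags totals no single provider could plausibly render; the 24-hour threshold and record layout are assumptions for illustration.

from collections import defaultdict

def flag_impossible_days(pooled_claims, max_hours_per_day=24.0):
    # pooled_claims: records pooled across payers, each a dict with
    # 'provider_id', 'date', and 'service_hours' (time billed on the claim).
    hours = defaultdict(float)
    for c in pooled_claims:
        hours[(c["provider_id"], c["date"])] += c["service_hours"]
    # Only the pooled view can reveal the scheme: each payer alone may see
    # a plausible total, while the cross-payer sum exceeds 24 hours.
    return sorted(
        (provider, day, total)
        for (provider, day), total in hours.items()
        if total > max_hours_per_day
    )

This mirrors a challenge the report discusses later: a provider can keep each individual payer's totals reasonable even while the aggregate across payers is impossible.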
In another study, HFPP pooled payer information on billing codes that are frequently misused by providers engaged in potential fraud, such as codes commonly used to misrepresent non-covered services as covered. See table 4 for a description of HFPP’s completed studies as of May 2017.

Participants reported that HFPP’s studies helped them to identify and take action against potentially fraudulent providers that would otherwise have gone unidentified. For instance, both public and private payers reported that HFPP’s report on non-operational providers uncovered providers that they had not previously identified as suspect. CMS officials and one private payer we interviewed said that they used information from this study to conduct site visits of reportedly non-operational providers. CMS officials told us that they revoked a number of the providers after confirming that they were indeed non-operational. CMS officials also said that they review the results of HFPP studies and provide information on potentially fraudulent providers to ZPICs when appropriate. The information may either serve as new leads or help support existing investigations.

Participants also reported that study results have helped them uncover payment vulnerabilities of which they might not otherwise have been aware. For example, CMS officials stated that they used the HFPP report on misused procedure codes to evaluate several Medicare payment vulnerabilities and then implemented edits to address them. In instances where participants reported that HFPP studies revealed suspect providers or schemes that were known to them, participants stated that HFPP study results helped them to confirm suspicions, better assess potential exposure, and prioritize and develop internal investigations.

Several participants we interviewed noted that even though HFPP study results can help them identify suspect providers, they may still face challenges using the information to take corrective actions. HFPP participation rules require payers to examine their internal data and claims to investigate and build cases against suspect providers before taking any corrective actions, partly in order to minimize the risk of payers taking action on false positive study results. For certain types of fraud schemes, however, a participant’s internal information alone may not provide enough evidence of improper billing. For instance, although an HFPP study may reveal clear evidence that a provider is billing multiple payers for an unreasonable number of services in a single day, the provider may have billed each individual payer for only a limited, reasonable number of services.

Participants reported that HFPP has also facilitated both formal and informal information sharing among payers, and indicated that it has helped them learn about fraud vulnerabilities and strategies for effectively addressing them. Formal information sharing includes presentations at HFPP meetings and a whitepaper on how payers can help address beneficiary opioid abuse and reduce opioid-related fraud. HFPP also manages a web portal where participants can share individual best practices and post “fraud alerts” about emerging fraud schemes or suspect providers. Informal information sharing includes knowledge exchanged through the networking and collaboration that occurs among HFPP participants, both at in-person HFPP meetings and through the web portal’s participant directory.
Although HFPP began operations in 2012, participants we interviewed stated that much of the initial work of the partnership involved negotiating the logistics for collecting and storing participants’ claims data. CMS contracts with a trusted third party (TTP) entity to administer HFPP. The TTP consolidates, secures, and confidentially maintains the claims data shared by participants, and conducts studies that analyze the pooled data to identify potential fraud across payers. According to several participants we interviewed, some payers were initially reluctant to share claims data with the TTP because claims contain sensitive provider and beneficiary information and private payers may view them as proprietary. Accordingly, it took time for the TTP to demonstrate to payers its ability to securely store and use pooled claims data. Payers’ reluctance resulted in an early time- and resource-intensive data sharing strategy that relied upon payers submitting a limited amount of claims data on a study-by-study basis, in a particular format, stripped of beneficiaries’ personally identifiable information and protected health information.

Recently, HFPP began to pursue a new data sharing strategy. According to the TTP and participants we interviewed, payers will send in generalized data, reducing the data sharing burden on payers and enabling HFPP to conduct new types of studies to combat fraud. The data can be submitted in various formats, relieving payers from the need to extract and clean study-specific data. All participant data will be pooled and stored, and multiple studies will be run on the data submitted. Payers may voluntarily submit data that include beneficiaries’ personally identifiable information and protected health information. According to CMS officials, collection of personally identifiable information and protected health information will allow HFPP to conduct studies that involve identifying beneficiaries across payers, such as studies examining fraud schemes in which multiple providers fraudulently bill for the same beneficiaries. Several HFPP participants we spoke with indicated their support of the new strategy and willingness to provide beneficiaries’ personally identifiable information and protected health information for more in-depth HFPP studies. As of May 2017, 38 partners had signed data sharing agreements with the new TTP. However, not all payers that previously shared claims data have agreed to participate in the new data sharing strategy, and those payers are still working with the TTP to formalize agreements regarding how their claims data will be stored and used.

We provided a draft of this report to HHS. HHS provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional requesters, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.
In addition to the contact named above, Martin T. Gahart (Assistant Director), Michael Erhardt (Analyst-in-Charge), Muriel Brown, Cathleen Hamann, Colbie Holderness, and Jennifer Whitworth made key contributions to this report.
CMS analyzes Medicare fee-for-service claims data to further its program integrity activities. In 2011, CMS implemented a data analytic system called FPS to develop leads for fraud investigations conducted by CMS program integrity contractors and to deny improper payments. In developing leads, FPS is intended to help CMS avoid improper payment costs by enabling quicker investigations and more timely corrective actions. Additionally, in 2012, CMS helped establish the HFPP to collaborate with other health care payers to address health care fraud. One of the key activities of the HFPP is to analyze claims data that are pooled from multiple payers, including private payers and Medicare.

GAO was asked to review CMS's use of FPS and the activities of the HFPP. This report examines (1) CMS's use of FPS to identify and investigate providers suspected of potential fraud, (2) the types of payments that have been denied by FPS, and (3) HFPP efforts to further CMS's and payers' ability to address health care fraud. GAO reviewed CMS documents, including reports to Congress on FPS, contractor statements of work, and information technology system user guides, and obtained fiscal year 2015 and 2016 data on FPS fraud investigations and claim denials. GAO also interviewed CMS officials and CMS program integrity contractors regarding how they use FPS, and a non-generalizable selection of HFPP participants regarding information and data sharing practices and anti-fraud collaboration efforts.

Investigations initiated or supported by the Centers for Medicare & Medicaid Services' (CMS) Fraud Prevention System (FPS)—a data analytic system—led to corrective actions against providers and generated savings. For example, in fiscal year 2016, CMS reported that 90 providers had their payments suspended because of investigations initiated or supported by FPS, which resulted in an estimated $6.7 million in savings. In fiscal year 2016, 22 percent of Medicare fraud investigations conducted by CMS program integrity contractors were based on leads generated by FPS analysis of Medicare claims data. Officials representing Medicare's program integrity contractors told GAO that FPS helps speed up certain investigation processes, such as identifying and triaging suspect providers for investigation. However, the officials said that once an investigation is initiated, FPS has generally not sped up the process for investigating and gathering evidence against suspect providers. CMS has not tracked data to assess the extent to which FPS has affected the timeliness of contractor investigation processes. However, CMS is implementing a new information technology system that tracks such data, and officials said that they plan to use the data to assess FPS's effect on timeliness.

FPS denies individual claims for payment that violate Medicare rules or policies through prepayment edits—automated controls that compare claims against Medicare requirements in order to approve or deny claims. FPS prepayment edits specifically target payments associated with potential fraud. For example, an FPS edit denies physician claims that improperly increase payments by misidentifying the place where the service was rendered, which helped address a payment vulnerability associated with millions in overpayments. FPS edits do not analyze individual claims to automatically deny them based on risk alone or the likelihood that they are fraudulent without further investigation. As of May 2017, CMS had implemented 24 edits in FPS.
CMS reported that FPS edits denied nearly 324,000 claims and saved more than $20.4 million in fiscal year 2016.

The Healthcare Fraud Prevention Partnership (HFPP) is a public-private partnership that began in 2012 with the aim of facilitating collaboration among health care payers to address health care fraud. The HFPP had 79 participants as of June 2017. Participants, including CMS officials, stated that sharing data and information within HFPP has been useful to their efforts to address health care fraud. HFPP conducts studies that pool and analyze multiple payers' claims data to identify providers with patterns of suspect billing across payers. Participants reported that HFPP's studies helped them to identify and take action against potentially fraudulent providers and payment vulnerabilities of which they might not otherwise have been aware. For example, one study identified providers who were cumulatively billing multiple payers for more services than could reasonably be rendered in a single day. Participants also stated that HFPP has fostered both formal and informal information sharing among payers.

The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
The human health risks posed by any given type of radioactive material depend on its activity level, or intensity; how long exposure lasts; and the way in which the body is exposed to it—via inhalation, ingestion, or external exposure. The different types of radiation—including alpha, beta, gamma, and neutron—vary in how easy or difficult they are to block or shield, which, in turn, affects the health threat posed by a particular type of radiation. Depending on the radioactive material’s intensity and the length and manner of exposure to it, health effects range from death, to severe injury, to the development of cancer, to no discernible damage. For example, alpha radiation poses little threat to human health from external exposure, but poses considerable health risks if inhaled or ingested. Gamma radiation is more penetrating and, if not properly shielded, can cause injury or death through external exposure. Neutron radiation, like gamma radiation, if not shielded, can also cause injury or death through external exposure. Although sources of neutron radiation are less common, neutron radiation is emitted from some materials that are used to make nuclear weapons.

NRC oversees licensees of radiological material through three regional offices located in Illinois, Pennsylvania, and Texas; radiological material licensing responsibilities for the region II office in Georgia are handled by the region I office in Pennsylvania. NRC has relinquished regulatory authority for licensing and regulating certain radioactive material to 37 states that have entered into an agreement with NRC (agreement states). Figure 1 shows which states are agreement states and in which states NRC has maintained all regulatory authority (NRC states).

NRC and agreement states issue two types of licenses authorizing the possession and use of radioactive materials: specific licenses and general licenses. Specific licenses, which are the focus of this report, are issued for devices that contain relatively larger sealed radioactive sources. These devices, such as medical equipment used to treat cancer, cameras used for industrial radiography, and moisture and density gauges used in construction, generally require training to be used safely and may also need to be properly secured to avoid misuse. An organization or individual seeking to obtain a specific license must submit an application and gain the approval of either NRC or the appropriate agreement state prior to receiving and using licensed radioactive materials. According to NRC, of the approximately 21,000 specific radioactive materials licenses in the United States, NRC administers about 2,900, and agreement states administer the rest.

Our prior work on security of radioactive materials found that NRC could do more to ensure the security of these materials. Specifically, we recommended in 2008 that, among other things, NRC take steps to develop and implement the systems it was then planning to better track, secure, and control radioactive materials. These systems are the National Source Tracking System (NSTS), the Web-based Licensing System (WBL), and the License Verification System (LVS). NSTS, deployed in January 2009, tracks category 1 and 2 sources of the 20 radionuclides that NRC determined are sufficiently attractive for use in an RDD or for other malicious purposes and warrant national tracking.
NSTS is a transaction-based system that tracks each major step that each tracked radioactive source takes within the United States from “cradle to grave.” Licensees are responsible for reporting the manufacture, shipment, arrival, disassembly, and disposal of all nationally tracked sources. A nationally tracked source is a source containing a category 1 or 2 quantity of certain radioactive materials specified in NRC’s regulations. More specifically, NSTS includes the radionuclide, quantity (activity), manufacturer, manufacture date, model number, serial number, and site address. The licensee has until the close of the next business day after a transaction—such as the sale of a source from a vendor to a customer—takes place to enter it into the system. As a result, the locations of all such sources are accounted for and closely tracked.

While NSTS is presently configured to track larger and potentially more dangerous radioactive sources, NRC’s WBL—deployed in August 2012—provides quick access to up-to-date information on all NRC and four agreement states’ specific licenses for all radioactive materials and sources in all five IAEA categories, enabling the user to enter, maintain, track, and search radioactive material licensing and inspection information. WBL also includes pdf images of all paper copies of category 1 and 2 licenses for both NRC and agreement state licensees. NRC also developed a third system—LVS, deployed in May 2013—which draws on the information in NSTS and WBL and provides information to regulators, vendors, and other would-be transferors on whether applicants seeking to acquire category 1 and 2 sources are legitimately licensed to do so. This is particularly important because paper licenses issued by NRC and agreement states can be altered or counterfeited. LVS provides a means to mitigate the risks of using paper licenses.

While NRC and agreement states have taken steps to improve their licensing programs and better ensure that radiological materials are safe and secure, concerns about the theft of radioactive materials and the possible consequences of a dirty bomb attack persist. In 2012, for example, we identified security weaknesses at some U.S. medical facilities that use high-risk radioactive materials, such as cesium-137, and in 2014, we found that challenges exist in reducing the security risks faced by licensees using high-risk radioactive materials for industrial purposes.

NRC periodically evaluates NRC regional offices’ and agreement states’ programs for licensing radioactive materials through its Integrated Materials Performance Evaluation Program (IMPEP). NRC implemented IMPEP in 1995 to periodically review NRC regional office and agreement state radioactive materials programs to ensure that they are adequately protecting public health and safety from the potential hazards associated with the use of radioactive materials, and that agreement state programs are compatible with NRC’s program. As part of IMPEP, each NRC regional office and agreement state regulatory program is typically expected to undergo a program review every 4 years; reviews may occur more or less frequently depending on a program’s past performance.
The IMPEP reviewers examine a regional office’s or an agreement state’s performance in areas such as licensing and inspections to determine if the regional office’s program is adequate to protect public health and safety and if the agreement state’s program is adequate to protect public health and safety and compatible with NRC requirements. NRC also has the option to employ greater oversight of agreement state programs if it discovers performance issues. Specifically, if performance problems are found, a Management Review Board (MRB)—composed of NRC officials and an agreement state liaison—may decide to (in ascending order of seriousness) institute Monitoring, Heightened Oversight, Probation, Suspension, or Termination.

The MRB may decide to place an agreement state on Monitoring if weaknesses in the program have resulted in or could result in less than satisfactory performance in one or more performance areas. If an agreement state program is found to have more serious problems (i.e., one or more performance indicators are found to be unsatisfactory), the MRB may opt to place a program on Heightened Oversight. Under Heightened Oversight, a program may be requested to submit a program improvement plan, which involves establishing a plan to address all recommendations to eliminate unsatisfactory performance as well as frequent contact with NRC to closely monitor progress. If the program under IMPEP review does not correct performance weaknesses under Heightened Oversight, the MRB/NRC may place the program on Probation or even suspend or terminate the agreement and reassert regulatory authority. Probation is a formal process and requires approval of the Commission and notification of the state’s governor, congressional delegation, and the public.

NRC and agreement states have taken several steps since 2007 to help ensure that radioactive materials licenses are granted only to legitimate organizations and that licensees can obtain such materials only in quantities allowed by their licenses, but they have not taken some measures for better controlling category 3 quantities of radioactive materials. In 2008, NRC developed revised screening criteria and a checklist covering all five IAEA categories of radioactive materials and now directs regions and agreement states to conduct prelicensing site visits for all unknown applicants. NRC and agreement states performed IMPEP reviews to assess whether licensing guidance was being met and took corrective actions when it was not. NRC also developed and deployed NSTS, WBL, and LVS to better control such materials, although these systems are focused on category 1 and 2 quantities. NRC does not require that category 3 quantities be tracked in NSTS, nor does it require that all category 3 licenses be included in WBL. LVS, which queries NSTS and WBL, provides information to regulators and vendors on whether a license is valid. By not including category 3 materials in NSTS or most agreement state licenses in WBL, NRC has not taken an important step that could better track and control these materials. Further, including all category 3 materials in these systems could help address the risk that paper licenses issued by NRC and agreement states could be altered or counterfeited or that a licensee could obtain radioactive materials in quantities greater than what is allowed by its license.

NRC has taken a number of steps to address the vulnerabilities in its licensing process identified by GAO and others.
Specifically, in September 2007, NRC approved its Action Plan to respond to recommendations to address security issues in its and agreement states’ radioactive materials programs raised in NRC Inspector General, Senate subcommittee, and GAO reports. NRC also established prelicensing and materials working groups and the Independent External Review Panel to assess the security of NRC and agreement state programs and develop recommendations to address any vulnerabilities identified. Among the outcomes of the working groups and panel was the September 2008 issuance of revised prelicensing guidance, which, among other things, suspended the “good faith presumption,” according to NRC officials. Prior to this change, NRC and agreement states were to maintain a good faith presumption that assumed that applicants and licensees did not have malicious intentions and that they would be honest and truthful in providing information to regulators. The revised guidance suspended this presumption and directed regions and agreement states to conduct prelicensing site visits for all unknown applicants. Prior to June 2007, such visits were optional except in cases where the proposed use of radioactive materials involved unusually complex technical, safety, or unprecedented issues, or was otherwise judged to be high risk. The revised guidance directed NRC regions to conduct prelicensing site visits for unknown applicants for specific licenses starting in September 2008, and, as a matter of compatibility, agreement states have been directed to do so since March 2009. According to NRC officials, the suspension of the presumption of good faith and the required site visits were together intended to provide greater scrutiny of license applications and of unknown applicants during prelicensing site visits.

In addition to suspending the good faith presumption for previously unknown applicants, NRC developed screening criteria to determine whether a prelicensing site visit should be conducted. Specifically, among other things, these criteria focus on whether the applicant may already have a license elsewhere with NRC or agreement states. If the applicant is known to NRC or an agreement state, a site visit may not need to be conducted. Nonetheless, some agreement states conduct prelicensing site visits for all applicants, regardless of whether they are known to NRC or other agreement states, according to NRC officials. According to NRC, the purpose of the site visit is to have a face-to-face meeting with the applicant to determine whether there is a basis for confidence that the sought radioactive materials will be used as represented in the application when the applicant receives the license. NRC also established a 14-point checklist to guide prelicensing site visits and developed a list of questions and activities related to the applicant’s business operations, facility, radiation safety operations, and personnel qualifications, to scrutinize the applicant and provide a basis for confidence that the applicant will use the radioactive material as specified in the license.

In 2008, NRC also adopted revised prelicensing guidance. Under this guidance, according to NRC officials, for any specific license (category 1-5) to be granted, unknown applicants must demonstrate during the prelicensing site visit that they are aware of, capable of, and committed to complying with all applicable (health, safety, and security) guidance before they take possession of licensed radioactive materials.
In general, according to NRC officials, applicants must demonstrate that they are constructing facilities and establishing procedures and that they have sufficient qualified staff to support the size and scope of the program described in the application. In addition, NRC officials told us that new applicants for category 1 and 2 quantities also undergo an on-site security review performed by NRC or agreement state officials. These security reviews verify that the applicant is prepared to implement the required security measures before the applicant takes possession of licensed radioactive materials, according to NRC officials. (On-site security reviews are not conducted for applicants for category 3-5 licenses.) According to NRC staff, those conducting on-site security reviews determine whether the applicant has the staff, processes, procedures, facilities, and equipment to be ready to comply with all applicable additional security requirements. NRC officials told us that they inspect each licensee for compliance with health, safety, and security requirements for all licenses (category 1-5) during an inspection after a licensee takes possession of the materials and that this inspection occurs within 12 months of the issuance of a new or amended license. NRC officials we spoke with, however, said that the initial postlicensing inspection may, and typically does, take place sooner.

Through IMPEP reviews, NRC identified instances where agreement state programs did not follow NRC licensing guidance and took steps to ensure that corrective actions were taken. For example, according to NRC officials, from 2009 to 2013, IMPEP reviews found that three agreement state programs did not consistently apply the 2008 prelicensing guidance. As a result, NRC reminded all agreement state programs to follow prelicensing guidance to ensure that the problem would not continue. According to NRC officials, NRC regional offices and agreement state agencies follow essentially the same guidance and procedures when reviewing license applications.

NRC also took steps to improve IMPEP by, among other things, addressing program weaknesses. For example, in 2008, the NRC-chartered Materials Program Working Group recommended that NRC incorporate new security policies and foster an enhanced security culture as part of IMPEP reviews. In 2007, GAO recommended that NRC conduct periodic oversight of license application examiners to ensure that the new guidance is being appropriately applied. In response to this recommendation, NRC officials told us that they started working to incorporate enhanced security measures into the review process. For example, according to NRC officials, IMPEP review teams now evaluate programs on items such as their implementation of the prelicensing checklist, control of sensitive information, and amending of licenses to include new security requirements. In addition, the Commission directed NRC staff to develop options, among other things, to revise IMPEP metrics. According to NRC officials, the Commission approved the staff’s plan to improve IMPEP consistency by updating guidance and training, and the staff have started implementing plans to enhance the IMPEP process and expect these activities to be completed by the end of 2017.

To help ensure that licensees can obtain radioactive materials only in quantities allowed by their licenses, NRC developed and deployed NSTS and WBL to track category 1 and 2 quantities of radioactive materials and record specific license information, respectively.
It also deployed LVS, which queries WBL and NSTS, to better enable regulators, vendors, and other licensees to ensure that those seeking category 1 and 2 quantities of radioactive materials are properly licensed to do so. Specifically, prior to transferring category 1 and 2 quantities of radioactive materials, licensees are required to verify with the appropriate regulatory body that the transferee is licensed to have material of the type, form, and quantity specified on the license and, in the case of category 1, to receive material at the location specified on the license. Verification can be done electronically using LVS or by the vendor or other seller (licensee) contacting the appropriate regulatory body (specifically, NRC or the agreement state that issued the license) directly to confirm the validity of the license. LVS queries WBL and NSTS and enables users to confirm that (1) a category 1 or 2 license is valid and accurate, (2) the licensee is authorized to acquire the quantities and types of radioactive materials sought, and (3) the licensee’s current category 1 or 2 inventories in NSTS do not exceed the possession limits. If the licensee is over its possession limit at the time the license verification request is made, the LVS user receives an error message to contact the regulatory agency that issued the license for a manual license verification, according to NRC officials. For category 1 and 2 licenses, deployment and use of these three systems, combined with the requirement that transferors verify the legitimacy of licenses with the appropriate regulatory body, serve as an impediment to those who would attempt to illicitly obtain radioactive materials using a counterfeit or altered license.

In contrast to the requirements for category 1 and 2 quantities of radioactive materials, NRC does not require the tracking of category 3 materials or specifically require vendors to verify the legitimacy of licenses with the appropriate regulatory body for those seeking to acquire category 3 materials. Category 3 quantities of radioactive materials, which are considered dangerous by IAEA, are not tracked in NSTS, nor are licenses for such material issued by most agreement states included in WBL. In addition, unlike transfers of category 1 and 2 quantities of radioactive materials, NRC regulations governing transfers of category 3 and smaller quantities of radioactive materials, which were last updated in 1978, do not specifically require transferors to contact the appropriate NRC regional office or agreement state regulator to verify that those wishing to take possession of the material are licensed to do so. Instead, transferors have several options, including obtaining a copy of the transferee’s license, for verifying that the transferee has a license.

We recommended in 2008 that NRC include all potentially dangerous radioactive sources in NSTS to address risks that a licensee could obtain radioactive materials in quantities greater than what is allowed by its license. In 2009, after years of study, NRC staff recommended that the Commission approve a final rule requiring that category 3 materials be tracked in NSTS.
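A schematic of the three LVS confirmations described above might look like the following sketch; the record structures and field names are invented for illustration and do not reflect the actual WBL or NSTS schemas.

def verify_transfer(license_rec, requested, nsts_inventory):
    # license_rec: license data as it might be drawn from WBL
    # requested: the radionuclide and activity the transferee seeks
    # nsts_inventory: transferee's current holdings (activity) per NSTS
    # 1. The category 1 or 2 license is valid and accurate
    if not license_rec["valid"]:
        return "CONTACT_REGULATOR"
    # 2. The licensee is authorized for the type and quantity sought
    auth = license_rec["authorized"].get(requested["radionuclide"])
    if auth is None or requested["activity"] > auth["max_activity"]:
        return "CONTACT_REGULATOR"
    # 3. Current inventory plus the transfer stays within possession limits
    if nsts_inventory + requested["activity"] > auth["possession_limit"]:
        return "CONTACT_REGULATOR"  # mirrors LVS's manual-verification error
    return "VERIFIED"

The report's larger point is that no comparable automated check exists for category 3 transfers, so a transferor relying on a paper license has no equivalent backstop.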
The recommendation, according to NRC staff, was based on several factors: category 3 sources are considered dangerous by IAEA; there is potential to accumulate category 3 sources by aggregation to a more dangerous category 2 level; the additional burden of tracking category 3 sources was deemed justified given the benefit of improved source accountability; and NSTS could accommodate additional data for newly tracked sources.

When considering the recommendation to require that category 3 materials be tracked in NSTS, the Commission was evenly divided. Specifically, the Commission split two to two and thus did not adopt the recommendation as Commission policy. Accordingly, it continues to be the case that only category 1 and 2 sources are required to be tracked in NSTS.

In addition to not requiring tracking of category 3 quantities of radioactive materials, NRC regulations governing transfers of category 3 and smaller quantities of radioactive materials do not specifically require transferors to verify the legitimacy of the license with the appropriate regulatory body. Instead, transferors are required to choose one of several methods to assure themselves that the purchaser has a license. Options include obtaining a copy of the transferee’s license and verifying directly with the appropriate regulatory body that a purchaser has a license to acquire the sought category 3 or below radioactive materials. Because transferors of category 3 materials are not specifically required to verify licenses through LVS or directly with the appropriate regulatory body, and because most agreement state category 3 license information is not in WBL, transferors cannot reliably verify through LVS that a purchaser is legitimately licensed. Instead, to get agency verification, transferors would need to contact the appropriate NRC regional office or agreement state regulatory body. By contrast, those transferring category 1 and 2 quantities of radioactive materials to other parties must verify license validity either by using LVS or by contacting the relevant NRC regional office or agreement state regulatory authority. The NRC regulations applicable to category 3 and smaller quantities of radioactive materials have not been updated since 1978.

According to NRC officials, many transferors of category 3 and smaller quantities of radioactive materials comply with NRC requirements by obtaining and keeping a copy of the transferees’ licenses for their records. However, there is presently no specific requirement that they do so. Because NRC and agreement states do not require transferors of category 3 and smaller quantities of radioactive materials to verify the validity of a transferee’s license by contacting the appropriate regulatory body directly, and do not make LVS available for use by these transferors, they do not have assurance that their systems would prevent bad actors from altering licenses or fraudulently reporting the details of their licenses to transferors, accumulating dangerous materials by aggregation to category 2 or larger quantities on the basis of those fraudulent licenses, and thereby endangering public health and safety. On this point, we recommended in 2007 that NRC explore options to prevent individuals from counterfeiting NRC licenses, especially if this counterfeiting allows transferees to purchase more radioactive materials than they are approved for under the terms of their original licenses.
Our testing of NRC and agreement state programs showed the guidance—including the suspension of the good faith presumption, the screening criteria, and the checklists, as well as inspectors’ application of scrutiny during prelicensing site visits—to be effective in two of our three cases. In the third case, we were able to obtain a license for a category 3 quantity of radioactive materials and, by altering the paper license, secure commitments to purchase a category 2 quantity of radioactive materials by aggregation.

In order to test the effectiveness of NRC’s revised guidance, screening criteria, checklists, and the prelicensing site visit, we established three fictitious companies; leased vacant space in an industrial or office park for each company (two in agreement states, one in an NRC state); and submitted an application to the appropriate NRC regional office or agreement state for a specific radioactive materials license to possess a high-level category 3 quantity source that was only slightly below the threshold for a category 2 quantity source. We designed our test to fail the prelicensing site visit. In each case, we took no actions to prepare the leased space for the site visit. According to NRC officials, while the NRC prelicensing checklist does not require that a site have implemented all the requirements that apply to licensees, its purpose, among other things, is to establish a basis for confidence that radioactive material will be used as specified on the license being sought, and we made no attempt to improve or outfit the site to make it appear as if a legitimate business was operating there. In our view, a prelicensing site visit, conducted with adequate scrutiny, would likely reveal that our fictitious companies were not suitable for a license. In each case, after we submitted a license application and answered some additional questions from NRC or agreement state officials, we scheduled a time to meet officials from the NRC or agreement state at the location of the fictitious business.

Two of the three fictitious companies we established were unable to obtain a license because NRC or agreement state officials found some aspects of the application, the fictitious company, the leased space, or a combination of these not to be credible. In these two cases, the scrutiny of the prelicensing site visit was an important factor in the regulatory bodies not granting our fictitious companies a radioactive materials license.

GAO Also Obtained a License for a Fictitious Business in 2007

In 2007, GAO tested controls on the licensing of radioactive materials in two states—a state regulated by the Nuclear Regulatory Commission and an agreement state. To do this, GAO established two fictitious businesses and submitted a radioactive materials application to the relevant regulatory body for each state. GAO did not rent office space for its fictitious businesses but instead used post office boxes for addresses. GAO was able to obtain a genuine radioactive materials license from one of the two regulatory bodies. After obtaining a (paper) license, GAO investigators altered the license so the fictitious company could purchase a much larger quantity of radioactive material than the maximum listed on the license. GAO then sought to purchase, from two suppliers, devices containing radioactive materials. These suppliers gave GAO price quotes and commitments to ship the devices containing radioactive materials in an amount sufficient to reach the International Atomic Energy Agency category 3 level—considered dangerous if not safely managed or secured. Importantly, GAO could have accumulated substantially more radioactive material. GAO withdrew its application from the second regulatory body after the license examiners indicated that they would visit the fictitious company’s office before granting the license. An official with the regulatory body told GAO that conducting a site visit was a standard procedure before license applications are approved.

In the first case, officials from the regulatory body did not find a basis for confidence that our proposed operation would be adequate to protect public health and safety and minimize danger to life and property. To reach this conclusion, they asked us numerous detailed questions about the nature of our business and our past business experience. We had difficulty answering some of these questions because of the fictitious nature of our business. They asked for key business documents that we could not provide, such as a copy of a business license from the state. Further, they contacted us the day after the site visit about not being able to verify the work history of the company’s radiation safety officer. (We had fabricated this individual’s work history.) This regulatory body performed satisfactorily for all performance indicators during its most recent IMPEP review and was rated satisfactory on all performance indicators in two consecutive IMPEP reviews.

In the second case, officials from the regulatory body stated that we would not receive a license until the site was significantly more developed, consistent with operating as a genuine business, and had installed on-site an appropriately safe and secure storage container for the radiological source and posted the requisite safety placards specified in the application, among other things. These comments are consistent with NRC officials’ statements that the purpose of the site visit is to have a face-to-face meeting with the applicant to determine whether there is a basis for confidence that the sought radioactive materials will be used as represented in the application. Moreover, the regulatory body stated in a follow-up e-mail that the company must submit additional information on several aspects of the application before a license could be issued: new facility drawings (as the ones provided were not accurate), public radiation dose calculations (as the proposed facility was next to an office building), descriptions of the security measures that would be implemented, and more specific information about how the company planned to transfer the source from the facility to the company’s truck, since there was no garage door in the facility. In summary, the regulators stated that they wanted to “see everything that is in place right before you go into business.” This regulatory body had recently been subjected to Heightened Oversight by NRC because of problems uncovered during an earlier IMPEP review regarding, among other things, the qualifications, retention, and depth of its licensing staff. The regulatory body’s performance had improved in the next IMPEP review, and its status was upgraded from Heightened Oversight to Monitoring by the time the prelicensing visit took place.

In one of the three cases, we were able to obtain a license for one of our fictitious companies.
Specifically, our application was approved and the paper license was handed to our GAO investigator posing as a representative of our fictitious company at the end of the prelicensing site visit. During the application process and site visit, the regulatory official accepted our written and oral assurances of the steps that our fictitious company would take—to construct facilities, establish safety procedures, hire sufficient qualified staff, and construct secure storage areas—after receiving a license. We had taken no actions to implement any of these steps when regulators approved our application and awarded the company a license. The regulatory body in this case used a lengthier and more detailed application than the other two regulatory bodies from which we attempted to obtain licenses. However, notwithstanding NRC's guidance to suspend the presumption of good faith, the official from the regulatory body accepted our assurances without scrutinizing key aspects of our fictitious business to the extent that the other regulatory bodies had. This regulatory body was found to have satisfactory performance in all performance areas in its most recent IMPEP review.

Once we obtained a license, we were able to exploit the absence of a requirement to verify the legitimacy of category 3 licenses with the appropriate regulatory body and obtained commitments to acquire, by accumulating multiple category 3 sources, a category 2 quantity of radioactive material. Importantly, this material is 1 of the 20 radionuclides that NRC previously determined are attractive for use in an RDD (also known as a dirty bomb). After obtaining the license, we contacted a vendor of the category 3 radioactive source that we specified on our license application. We provided a copy of the license, among other things, to the vendor and subsequently obtained a signed commitment from this vendor to sell us the source. We then altered the paper license and contacted another vendor, who also agreed to sell us a category 3 source we specified on our altered license. When combined, these two high-level category 3 sources aggregate to a category 2 quantity of radioactive material. According to IAEA, a category 2 quantity, if not safely managed or securely protected, could cause permanent injury to a person who handled it, or was otherwise in contact with the material, for a short time (minutes to hours). NRC and agreement states require additional security measures for those seeking to acquire this quantity of material. Our fictitious business was not subjected to these more stringent measures and provisions, however, because we were seeking a category 3 quantity of material.

It is important to note that we undertook a very similar covert test of NRC and agreement state radioactive materials licensing programs in 2007, with very similar results. In 2007, we obtained a real radioactive materials license for a below category 3 quantity of material and then altered it to obtain commitments from multiple vendors to sell us, in aggregate, devices containing a category 3 quantity of a radioactive material considered attractive for use in an RDD. This time, we obtained a real license for a category 3 quantity of radioactive material and altered it to obtain commitments from multiple vendors to sell us, in aggregate, a more dangerous category 2 quantity of a type of radioactive material considered attractive for use in an RDD.
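The aggregation arithmetic can be illustrated using the ratio IAEA employs to categorize sources: a source's activity A divided by a radionuclide-specific "dangerous" activity value D. Under IAEA guidance, a source with A/D between 1 and 10 is category 3, while a source with A/D between 10 and 1,000 is category 2. The figures below are hypothetical, chosen only to show how two sources near the top of the category 3 band combine into a category 2 quantity; they are not the actual activities involved in our testing:

\[
\frac{A_1}{D} = 6 \;(\text{category 3}), \qquad \frac{A_2}{D} = 6 \;(\text{category 3}), \qquad \frac{A_1 + A_2}{D} = 12, \quad 10 \le 12 < 1{,}000 \;(\text{category 2})
\]

Because each purchase, viewed in isolation, remains below the category 2 threshold, neither transaction alone triggers the additional security measures that NRC and agreement states require for category 2 quantities.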
Once we received our license from the agreement state and secured commitments from vendors to sell us radiological material, we met with NRC officials in October 2015 to alert them to the outcomes of the investigative component of our work. As a result of our findings, NRC officials told us that they are taking a number of corrective actions. Specifically, NRC is updating training courses for new NRC and agreement state inspectors to reinforce the importance of properly implementing the prelicensing guidance. A key part of this training is to reinforce the suspension of the good faith presumption during prelicensing—particularly during site visits. NRC also developed and provided a training webinar for NRC and agreement state staff to further emphasize the prelicensing guidance and the importance of providing adequate scrutiny during site visits. In addition, NRC requested that NRC regional offices and agreement states conduct self-assessments of their implementation of the prelicensing guidance and site visits. Finally, according to NRC officials, NRC and agreement state working groups are currently developing and evaluating enhancements to (1) current prelicensing guidance overall, and (2) license verification and transfer requirements and prelicensing guidance for category 3 licenses in particular. However, NRC officials informed us that since the Commission did not adopt a proposal to include category 3 quantities of radioactive materials in NSTS in 2009, NRC had no current plans to require that category 3 quantities be included in NSTS. Because of this, NRC and the agreement states will continue to be very limited in their ability to track these dangerous quantities of radioactive material.

Since 2007, NRC has taken steps to implement several of the recommendations made by GAO and others to enhance the control and accountability of radioactive sources and materials. NRC has deployed data systems—NSTS, WBL, and LVS—that are helping to better track, secure, and control category 1 and 2 quantities of radioactive materials. NRC also developed revised guidance, screening criteria, and checklists covering all five IAEA categories of radioactive materials, and now directs regions and agreement states to conduct prelicensing site visits for all unknown applicants. However, NRC chose not to implement recommendations to better track, secure, and control category 3 materials. Our testing of the revised guidance, checklists, and prelicensing site visits showed these revised systems to be only partially effective in that our attempt to obtain a license using a fictitious company was successful in one of our three cases—allowing us to obtain commitments from vendors to sell, in aggregate, a category 2 quantity of radioactive material considered attractive for use in an RDD. This demonstrates vulnerabilities similar to those we found in 2007. To its credit, NRC has taken a number of corrective actions in response to our findings, including more training on prelicensing guidance to ensure that NRC and agreement state staff provide adequate scrutiny during prelicensing site visits. NRC has also formed working groups to consider enhancements to the prelicensing process. It will be important for NRC to continue these efforts as part of its process to ensure that its prelicensing guidance, including site visits, is effectively performed. Nonetheless, our work shows that NRC can do more to strengthen its processes of licensing radioactive materials.
Specifically, we continue to believe that NRC should implement the recommendations by GAO and others for enhancing the ability to track, secure, and control category 3 sources by including such sources in both NSTS and WBL. Doing so would also enable LVS to query these systems and better enable transferors to verify the legitimacy of those seeking to purchase radioactive materials. As the results of our covert vulnerability testing show, it is possible for someone to obtain a license, which is printed on paper; make alterations to this paper license; and use the altered license for a category 3 source to acquire another category 3 source and thereby accumulate more dangerous, high-risk category 2 quantities. Including category 3 quantities in NSTS and WBL, and requiring transferors to verify the legitimacy of licenses of those seeking to purchase radioactive materials through LVS or with the appropriate regulatory body, would provide greater assurance that a bad actor could not manipulate the system by, for example, altering a paper license, to acquire radioactive materials in aggregate greater than what they are authorized to possess.

Moreover, NRC regulations governing the steps that transferors of category 3 quantities of radioactive materials must take to verify that those wishing to take possession of the material are properly licensed to do so have not been updated since 1978 and may not be adequate to protect public health and safety. In contrast, NRC has taken several steps to update its licensing guidance by, among other things, directing regions and agreement states to conduct site visits for unknown applicants and suspending the good faith presumption, which fosters greater scrutiny of applicants. However, because paper licenses are vulnerable to being altered, not requiring transferors of category 3 quantities of radioactive materials to verify the validity of their licenses with the appropriate regulatory body may still allow bad actors to accumulate dangerous materials in quantities that threaten public health and safety.

Finally, prior to issuing a license to a new applicant for category 1 and 2 quantities, NRC and agreement states conduct an on-site security review to verify that the applicant is prepared to implement the required security measures before taking possession of licensed radioactive materials. However, such on-site security reviews are not conducted for applicants for category 3 quantities and below, and regulators told us that it may take up to a year for them to verify that applicants have implemented all required health, safety, and security measures. Although category 3 quantities of materials are considered dangerous by IAEA, NRC on-site security reviews are not currently conducted for all prospective licensees that will have access to dangerous quantities of radioactive materials.

Because some quantities of radioactive materials are potentially dangerous to human health if not properly handled, we recommend that NRC take action to better track and secure these materials and verify the legitimacy of the licenses for those who seek to possess them. Specifically, we recommend that the Nuclear Regulatory Commission (NRC) take the following three actions:

Take the steps needed to include category 3 sources in the National Source Tracking System and add agreement state category 3 licenses to the Web-based Licensing System as quickly as reasonably possible.
At least until such time that category 3 licenses can be verified using the License Verification System, require that transferors of category 3 quantities of radioactive materials confirm the validity of a would-be purchaser's radioactive materials license with the appropriate regulatory authority before transferring any category 3 quantities of licensed materials.

As part of the ongoing efforts of NRC working groups to develop enhancements to the prelicensing requirements for category 3 licenses, consider requiring that an on-site security review be conducted for all unknown applicants for category 3 licenses to verify that each applicant is prepared to implement the required security measures before taking possession of licensed radioactive materials.

We provided a draft of this product to NRC for comment. In its written comments, reproduced in appendix I, NRC neither explicitly agreed nor disagreed with our recommendations, but noted that the agency has formal evaluations underway considering all three recommendations. Specifically, NRC stated that the agency would consider GAO's recommendations as part of the working groups the agency has established to evaluate (1) including category 3 sources in WBL and NSTS, (2) license verification transfer requirements for category 3 sources, and (3) enhancing security and safety measures as part of the prelicensing process. In addition, NRC recommended that we revise the first recommendation for clarity. We modified the language in this recommendation to provide greater clarity. NRC also provided technical comments, which were incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairman of the Nuclear Regulatory Commission, the appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II.

In addition to the contact named above, Ned Woodward (Assistant Director), Antoinette Capaccio, Frederick Childers, Jenny Chow, John Delicath, Barbara Lewis, Steven Putansu, Brynn Rovito, Kevin Tarmann, and the Forensic Audits and Investigative Service team made key contributions to this work.
In 2007, GAO reported weaknesses in NRC's licensing program: GAO investigators, after setting up fictitious companies, were able to obtain an NRC license and then alter it to obtain agreements to purchase devices containing, in aggregate, a dangerous quantity of radioactive materials. GAO was asked to review and assess the steps NRC and agreement states have taken to strengthen their licensing processes. This report examines (1) the steps NRC and agreement states have taken to ensure that radioactive materials licenses are granted only to legitimate organizations and that licensees can obtain materials only in quantities allowed by their licenses; and (2) the results of covert vulnerability testing designed to test the effectiveness of these controls. GAO reviewed relevant guidance documents, regulations, and analyses of orders, and interviewed NRC and state officials. GAO also established three fictitious businesses and applied for a radioactive materials license for each.

The Nuclear Regulatory Commission (NRC) and the 37 states it permits to grant licenses for radioactive materials—called agreement states—have taken several steps since 2007 to help ensure that licenses are granted only to legitimate organizations and that licensees can obtain such materials only in quantities allowed by their licenses. However, NRC and agreement states have not taken some measures to better control some dangerous quantities of radioactive materials. The International Atomic Energy Agency established a system ranking quantities of certain radioactive materials into five categories based on their potential to harm human health, with, in descending order of danger, categories 1, 2, and 3 all considered dangerous. NRC developed revised guidance, screening criteria, and a checklist, among other things, and now directs NRC regions and agreement states to conduct prelicensing site visits for all unknown applicants, focusing on questions related to the applicant's business operations, facility, radiation safety operations, and personnel qualifications. NRC, however, has not strengthened controls for all categories of radioactive material considered dangerous. Unlike its process for applicants for category 1 and 2 quantities of radioactive materials, for category 3 applicants NRC does not review specific security measures before a license is issued. NRC has also developed and deployed the National Source Tracking System (NSTS), the Web-based Licensing System (WBL), and the License Verification System to better control some materials. However, these systems focus on the more dangerous category 1 and 2 quantities but not category 3 quantities. Further, NRC does not specifically require that the validity of category 3 licenses be verified by the seller with NRC or the agreement states—creating risks that licenses could be counterfeited or that licensees could obtain radioactive materials in quantities greater than what is allowed by their licenses. GAO's covert testing of NRC requirements showed them to be effective in two of three cases; in the third case, GAO was able to obtain a license and secure commitments to purchase, by accumulating multiple category 3 quantities of materials, a category 2 quantity of a radioactive material considered attractive for use in a "dirty bomb"—which uses explosives to disperse radioactive material.
To test NRC's prelicensing processes, GAO established three fictitious companies, leased vacant space for each company (two in agreement states, one in an NRC state), and submitted an application to the appropriate agreement state or NRC office for a license to possess a category 3 source only slightly below the threshold for category 2. GAO made no attempt to outfit the site to make it appear as if a legitimate business was operating there. In the two cases where GAO was unable to obtain a license, the scrutiny provided by NRC or agreement state (regulatory body) officials during the prelicensing site visit led to the license not being granted. In the third case, the official from the regulatory body accepted GAO's assurances without scrutinizing key aspects of the fictitious business, which led to a license being obtained. NRC is currently taking corrective actions to provide training to NRC and agreement state officials to emphasize greater scrutiny in conducting prelicensing site visits. According to NRC officials, NRC and agreement state working groups are currently developing and evaluating enhancements to (1) prelicensing guidance overall and (2) license verification and transfer requirements for category 3 licenses. GAO is making three recommendations to NRC, including that NRC (1) take steps to include category 3 quantities of radioactive materials in NSTS and WBL, and (2) require that transferors of category 3 quantities of radioactive materials confirm the validity of licenses with regulators before selling or transferring these materials. GAO provided a draft of this report to NRC for comment. NRC neither agreed nor disagreed with GAO's recommendations, but noted that the agency has formal evaluations underway considering all three recommendations.
Consumer-directed health plans generally include three components: a health plan with a high deductible, a savings account—such as an HSA—to pay for health care expenses, and enrollee decision-support tools.

An insurance plan with a high deductible. HSA-eligible plans are required to meet certain statutory criteria. The plans must have a minimum deductible amount—$1,050 for single coverage and $2,100 for family coverage in 2006—and a maximum limit on enrollee out-of-pocket spending—$5,250 for single coverage and $10,500 for family coverage in 2006. Preventive care services may be exempted from the deductible requirement, but coverage of most other services, including prescription drugs, is subject to the deductible. After meeting the deductible, the HSA-eligible plan pays for most of the cost of covered services until the enrollee meets the out-of-pocket spending limit, at which point the plan pays 100 percent of the cost of covered services.

A savings account to pay for health care expenses. There are several types of savings accounts that may be associated with consumer-directed health plans, with HSAs being among the most prominent. An HSA is a tax-advantaged savings account established for paying qualified medical expenses. Individuals are eligible to open an HSA if they are enrolled in an HSA-eligible plan and have no other health coverage, with limited exceptions. HSAs are owned by the account holder, and the accounts are portable—individuals may keep their accounts if they switch jobs or enroll in a non-HSA-eligible health plan. Both employers and individuals may contribute to HSAs, and individuals may claim a deduction on their federal income taxes for their HSA contributions. HSA balances can earn interest; roll over from year to year; and be invested in a variety of financial instruments, such as mutual funds. HSA-eligible plan enrollees who choose to pay for medical expenses from their HSA may access their account funds by check, by debit card, or by authorizing insurance carriers to allow providers to directly debit their account funds. HSAs are subject to annual contribution limits. In 2006, contributions were limited to 100 percent of the deductible, but not more than $2,700 for single coverage or $5,450 for family coverage. Contributions, earned interest, and withdrawals for qualified medical expenses are not subject to federal income taxation. A financial institution, such as a bank or insurance company, typically administers the account.

Decision-support tools. HSA-eligible plans typically provide enrollee decision-support tools that include, to some extent, information on the cost of health care services and the quality of health care providers. Experts suggest that reliable information about the cost of particular health care services and the quality of specific health care providers would help enrollees become more actively engaged in making health care purchasing decisions. These tools may be provided by health insurance carriers to all health insurance plan enrollees, but they are likely to be more important to enrollees of HSA-eligible plans, who have a greater financial incentive to make informed decisions about the quality and costs of health care providers and services.

HSA-eligible plans had lower premiums, higher deductibles, and higher out-of-pocket spending limits than traditional plans in 2005.
Specifically, data from national employer health benefits surveys regarding plans offered in the group market indicate:

Premiums for HSA-eligible plans averaged 35 percent less than employers' traditional plan premiums for single coverage and 29 percent less for family coverage in 2005.

Annual deductibles for HSA-eligible plans averaged $1,901 for single coverage and $4,070 for family coverage in 2005—nearly six times greater than those of employers' traditional plans.

The median annual out-of-pocket spending limit for HSA-eligible plans offered by large employers was $3,500 for single coverage in 2005, which was higher than the median out-of-pocket spending limit of $1,960 reported for traditional plans.

The HSA-eligible and traditional plans we reviewed covered the same broad categories of services, including preventive services, and also used similar provider networks in 2005.

Our illustration of enrollees' potential health care costs—including premiums, deductibles, and other out-of-pocket costs for covered services—for the three employers' 2005 health plans we reviewed showed that HSA-eligible plan enrollees would incur higher annual costs than PPO enrollees for extensive use of health care, but would incur lower annual costs than PPO enrollees for low to moderate use of health care. Specifically, we estimated that in the event of an illness or injury resulting in a hospitalization costing $20,000, the total costs incurred by the three employers' HSA-eligible plan enrollees would be 47 to 83 percent higher than those faced by the employers' PPO enrollees. In contrast, we estimated that the total costs paid by HSA-eligible plan enrollees who used low to moderate amounts of health care, visiting the doctor for illnesses or injuries six times in one year, would be 48 to 58 percent lower than the costs paid by the PPO enrollees. If HSA-eligible plan enrollees used tax-advantaged funds that they, or someone other than their employer, contributed to their HSA, their costs could have been lower than our estimates.

HSA-eligible plan enrollees generally had higher incomes than comparison groups. The average adjusted gross income of the estimated 108,000 tax filers reporting HSA contributions in 2004 was about $133,000, compared with $51,000 for all tax filers under age 65, according to IRS data. Moreover, 51 percent of tax filers reporting HSA contributions had an adjusted gross income of $75,000 or more, compared with 18 percent of all tax filers under age 65. (See fig. 1.) We also found similar income differences between HSA-eligible plan and traditional plan enrollees when we examined other data sources from the group and individual markets. Among Federal Employees Health Benefits Program (FEHBP) enrollees actively employed by the federal government, 43 percent of HSA-eligible plan enrollees earned federal incomes of $75,000 or more, compared with 23 percent for all enrollees in 2005. Additionally, two of the three employers we reviewed and eHealthInsurance reported that HSA-eligible plan enrollees had higher incomes than did traditional plan enrollees in 2005.

The data sources we examined did not conclusively indicate whether HSA-eligible plan enrollees were older or younger than comparison groups. IRS data suggest that the average age of tax filers who reported HSA contributions was about 9 years higher than the average age of all tax filers under age 65 in 2004.
Similarly, eHealthInsurance reported that in the individual market the average age of its HSA-eligible plan enrollees was 5 years higher than that of its traditional plan enrollees in 2005. In contrast, data from FEHBP and the three employers we reviewed indicate that the average age of HSA-eligible plan enrollees, excluding retirees, was 2 to 6 years lower than that of comparison groups of enrollees. Just over half of HSA-eligible plan enrollees and most employers contributed to HSAs. About 55 percent of HSA-eligible plan enrollees reported HSA contributions in 2004, according to our analysis of data obtained from IRS and a publicly available survey. HSA-eligible plan enrollees from the employers we reviewed were more likely to contribute to an HSA when their employer also offered account contributions. Among tax filers who claimed a deduction for an HSA in 2004, the average deduction was about $2,100 and the average amount deducted increased with income. About two-thirds of employers offering HSA-eligible plans contributed to their employees’ HSAs in 2005, according to two national employer health benefits surveys. In 2004, the average employer HSA contribution reported to IRS was about $1,064. Account holders used HSA funds to pay for medical care and to accumulate savings. About 45 percent of tax filers reporting an HSA contribution in 2004—made by themselves, their employers, or others on their behalf—also reported withdrawing funds in 2004. The average annual amount withdrawn by these tax filers was about $1,910. About 90 percent of these withdrawn funds were used to pay for expenses identified under the Internal Revenue Code as eligible medical expenses. IRS data show that about 40 percent of all funds contributed to HSAs in 2004 were withdrawn from the accounts by the end of the year. In addition to using HSAs for medical and other expenses, account holders appeared to use their HSA as a savings vehicle. About 55 percent of tax filers reporting HSA contributions in 2004 withdrew no money from their account in 2004. We could not determine whether HSA-eligible plan enrollees accumulated balances because they did not need to use their accounts (that is, they paid for care from out-of-pocket sources or did not need health care during the year) or because they reduced their health care spending as a result of financial incentives associated with the HSA-eligible plan and HSA. However, many focus group participants reported using their HSA as a tax-advantaged savings vehicle, accumulating HSA funds for future use. HSA-eligible plan enrollees who participated in our focus groups at the three employers we reviewed generally reported positive experiences with their plans. These focus group participants, who had voluntarily elected to enroll in HSA-eligible plans as one of several choices offered by their employers, cited the ability to accumulate savings, the tax advantages of having an HSA, and the ability to use an HSA debit card as positive aspects of HSAs. Participants reported few problems obtaining care, and when given a choice, many reported that they had reenrolled in the HSA-eligible plan. While focus group participants enrolled in HSA-eligible plans generally understood the key attributes of their plan, such as low premiums, high deductibles, and the mechanics of using the HSA, they were confused about certain other features. 
For example, many participants understood that certain preventive services were covered free of charge, but they also had trouble distinguishing between the preventive services and other services provided during a preventive office visit. Moreover, many participants were unsure what medical expenses qualified for payment using their HSA. Few participants researched the cost of hospital or physician services before obtaining care, although many participants researched the cost of prescription drugs. A few participants reported asking physicians about the cost of services, but others expressed discomfort with asking physicians about cost. For example, one participant said, "Americans don't negotiate. It's not polite to question the value of work." Participants in one focus group also reported not initially understanding the extent to which they needed to manage and take responsibility for their health care, including by asking questions about the cost of services and medications. Participants also reported that only limited information was available regarding key quality measures for hospitals and physicians, such as the volume of procedures performed and the outcomes of those procedures. The decision-support tools provided with the consumer-directed health plans we previously reviewed were limited and did not provide sufficient information to allow enrollees to fully assess cost and quality trade-offs of health care purchasing decisions.

Most participants reported that they would not recommend HSA-eligible plans to all consumers. Some participants said they enrolled in the HSA-eligible plan specifically because they did not anticipate getting sick, and many said they considered themselves and their families as being fairly healthy. Most participants would recommend the plans to healthy consumers, but not to those who use maintenance medication, have a chronic condition, have children, or may not have the funds to meet the high deductible.

In closing, as more individuals face the choice of enrolling in HSA-eligible plans or other consumer-directed health plans, they will likely weigh the savings potential and financial risks associated with these plans in relation to their own health care needs and financial circumstances. Because healthier individuals who use little health care could incur lower costs under HSA-eligible plans than under traditional plans, when given a choice they may be more likely to select an HSA-eligible plan than will less healthy individuals who use greater amounts of health care. It will be important to monitor enrollment trends and assess their implications for the cost of health care coverage for all HSA-eligible and traditional plan enrollees. Few of the HSA-eligible plan enrollees who participated in our focus groups researched cost before obtaining health care services. This may be due in part to a reluctance of consumers to question health care providers about the cost of their services and the dearth of information provided by insurance carriers to their enrollees about the cost of health care services under their plans—a limitation that insurance carriers are beginning to address. According to proponents, an increase in such health care consumerism can help restrain health care spending increases under the plans. Such an increase will likely require time, education, and improved decision-support tools that provide enrollees with more information about the cost and quality of health care providers and services.
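To make the selection dynamic described above concrete, consider a simplified sketch of the break-even arithmetic, using hypothetical figures broadly in line with the 2005 averages reported earlier (these are illustrative assumptions, not the terms of the actual plans GAO reviewed): an HSA-eligible plan with a $1,200 annual premium, a $1,900 deductible, and a $3,500 out-of-pocket limit, versus a PPO with a $1,850 premium, a $300 deductible, 20 percent coinsurance, and a $1,960 out-of-pocket limit (each limit assumed to include the deductible).

\[
\text{Low use (\$600 in covered services):}\quad 1{,}200 + 600 = \$1{,}800 \;(\text{HSA-eligible}) \quad\text{vs.}\quad 1{,}850 + 300 + 0.2(600 - 300) = \$2{,}210 \;(\text{PPO})
\]
\[
\text{High use (\$20{,}000 hospitalization):}\quad 1{,}200 + 3{,}500 = \$4{,}700 \;(\text{HSA-eligible}) \quad\text{vs.}\quad 1{,}850 + 1{,}960 = \$3{,}810 \;(\text{PPO})
\]

Under assumptions like these, an enrollee expecting little care comes out ahead in the HSA-eligible plan, while one expecting a costly episode pays more, matching the direction (though not the magnitudes) of the 47 to 83 percent and 48 to 58 percent differences estimated above for the actual plans reviewed.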
Finally, while HSA-eligible plan enrollees we spoke with were generally satisfied with their plans, it is notable that these enrollees each had a choice of health plans and voluntarily selected the HSA-eligible plan. Their caution that HSA-eligible plans may not be appropriate for everyone suggests that satisfaction may be lower when employees are not given a choice or when employer contributions to premiums or accounts do not sufficiently offset the potentially greater costs faced by HSA-eligible plan enrollees. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For future contacts regarding this testimony, please contact John E. Dicken at (202) 512-7119 or at dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Randy DiRosa, Assistant Director; Pamela N. Roberto; and Patricia Roy made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Health savings accounts (HSA) and the high-deductible health insurance plans that are eligible to be coupled with them are a new type of consumer-directed health plan attracting interest among employers and consumers. HSA-eligible plans constitute a small but growing share of the private insurance market, and the novel structure of the plans has raised questions about how they could affect enrollees' health care purchasing decisions and costs. This statement is based on GAO's August 2006 report entitled "Consumer-Directed Health Plans: Early Enrollee Experiences with Health Savings Accounts and Eligible Health Plans" (GAO-06-798). In this report, GAO reviewed (1) the financial features of HSA-eligible plans in comparison with those of traditional plans, such as preferred provider organizations (PPO); (2) the characteristics of HSA-eligible plan enrollees in comparison with those of traditional plan enrollees or others; (3) HSA funding and use; and (4) enrollees' experiences with HSA-eligible plans. HSA-eligible plans had lower premiums, higher deductibles, and higher out-of-pocket spending limits than did traditional plans in 2005, but both plan types covered similar services, including preventive services. For the three employers' health plans GAO reviewed to illustrate enrollees' potential health care costs, GAO estimated that HSA-eligible plan enrollees would incur higher annual costs than PPO enrollees for extensive use of health care, but would incur lower annual costs than PPO enrollees for low to moderate use of health care. HSA-eligible plan enrollees generally had higher incomes than comparison groups, but data on age differences were inconclusive. In 2004, 51 percent of tax filers reporting an HSA contribution had an adjusted gross income of $75,000 or more, compared with 18 percent of all tax filers under 65 years old. Two of the three employers GAO reviewed, the Federal Employees Health Benefits Program, and a national broker of health insurance also reported that HSA-eligible plan enrollees had higher incomes than traditional plan enrollees in 2005. Just over half of all HSA-eligible plan enrollees and most employers contributed to HSAs, and account holders used their HSA funds to pay for medical care and accumulate savings. About 55 percent of HSA-eligible plan enrollees reported HSA contributions to the Internal Revenue Service in 2004, and about two-thirds of employers offering HSA-eligible plans contributed to their employees' HSAs. About 45 percent of tax filers reporting 2004 HSA contributions also reported that they withdrew funds in that year, and 90 percent of these funds were withdrawn for qualified medical expenses. The other 55 percent of those reporting HSA contributions did not withdraw any funds from their HSA in 2004. HSA-eligible plan enrollees who participated in GAO's focus groups generally reported positive experiences, but most would not recommend the plans to all consumers. Few participants reported researching cost before obtaining health care services, although many researched the cost of prescription drugs. Most participants were satisfied with their HSA-eligible plans and would recommend them to healthy consumers, but not to those who use maintenance medication, have a chronic condition, have children, or may not have the funds to meet the high deductible.
The Ebola outbreak was first identified in the remote Guinean village of Meliandou in December 2013 and subsequently spread to neighboring Sierra Leone and Liberia and later to other African countries, including Mali, Nigeria, and Senegal (see fig. 1). Guinea, Liberia, and Sierra Leone experienced the largest number of confirmed, probable, or suspected cases and deaths. As of June 2016, WHO reported 28,616 confirmed, probable, or suspected Ebola cases and 11,310 deaths in these three countries since the onset of the outbreak (see fig. 2). WHO reported the majority of these cases and deaths between August 2014 and December 2014. After 2014, the number of new cases began to decline, and on March 29, 2016, the WHO Director-General terminated the Public Health Emergency of International Concern designation related to Ebola in West Africa and recommended lifting the temporary restrictions to travel and trade with Guinea, Liberia, and Sierra Leone. However, seven confirmed and three probable cases were reported in Guinea, and three confirmed cases of Ebola were reported in Liberia in March 2016 and April 2016, respectively. Figure 3 shows the numbers of confirmed, probable, and suspected Ebola cases and deaths in Guinea, Liberia, and Sierra Leone by month from August 2014 to December 2015. USAID noted that the Ebola outbreak caused disruptions to health systems and adverse economic impacts in Guinea, Liberia, and Sierra Leone. Health-care resources in these countries were diverted from other programs to Ebola response efforts, and some patients and health-care workers avoided health facilities for fear of contracting the disease. USAID also noted that these countries experienced job loss, disruptions to trade, reduced agricultural production, decreased household purchasing power, and increased food insecurity as a consequence of the outbreak. The closure of international borders in response to the outbreak hampered economic activity by limiting trade and restricting the movement of people, goods, and services. In addition, the Ebola outbreak occurred at the beginning of the planting season, which affected agricultural markets and food supplies. U.S. efforts to respond to the Ebola outbreak in West Africa began in March 2014 when the Centers for Disease Control and Prevention (CDC) deployed personnel to help with initial response efforts. In August 2014, the U.S. ambassador to Liberia and the U.S. chiefs of mission in Guinea and Sierra Leone declared disasters in each country. Following the declarations, USAID and CDC deployed medical experts to West Africa to augment existing staff in each of the affected countries. In August 2014, a USAID-led Disaster Assistance Response Team (DART) deployed to West Africa. USAID-funded organizations began to procure and distribute relief commodities and establish Ebola treatment units, and CDC began laboratory testing in West Africa. In September 2014, the Department of Defense (DOD) began direct support to civilian-led response efforts under Operation United Assistance. In September 2014, the President announced the U.S. government’s strategy to address the Ebola outbreak in the three primarily affected West African countries. The National Security Council, the Office of Management and Budget, USAID, and State developed an interagency strategy that has been revised over time to reflect new information about the outbreak and changes in international response efforts. The strategy is organized around four pillars and their associated activities (see table 1). 
Multiple U.S. agencies have played a role in the U.S. government's response to the Ebola outbreak in West Africa. USAID managed and coordinated the overall U.S. effort to respond to the Ebola outbreak overseas, and State led diplomatic efforts related to the outbreak. CDC led the medical and public health component of the U.S. government's response efforts. For example, CDC provided technical assistance and conducted disease control activities with other U.S. agencies and international organizations. In addition, CDC assisted in disease surveillance and contact tracing to identify and monitor individuals who had come into contact with an Ebola patient. DOD coordinated U.S. military efforts with other agencies, international organizations, and nongovernmental organizations. DOD also constructed Ebola treatment facilities, transported medical supplies and medical personnel, trained health-care workers, and deployed mobile laboratories to enhance diagnostic capabilities in the field. Other federal agencies, including the Department of Agriculture, also contributed to the overall U.S. response.

Although U.S. response efforts have focused primarily on Liberia, the United States has been engaged in all three primarily affected West African countries. The U.S. government also undertook response efforts in Mali following an Ebola outbreak in that country in October 2014. Currently, U.S. response efforts include regional activities that encompass multiple countries, including activities to build the capacity of West African countries to prepare for and respond to infectious disease outbreaks.

Prior to the fiscal year 2015 appropriation, USAID and State funded activities in response to the Ebola outbreak using funds already appropriated. On December 16, 2014, Congress appropriated approximately $5.4 billion in emergency funding for Ebola preparedness and response to multiple U.S. agencies as part of the Consolidated and Further Continuing Appropriations Act, 2015 (the fiscal year 2015 consolidated appropriations act). Within the fiscal year 2015 consolidated appropriations act, Congress provided approximately $2.5 billion to USAID and State for necessary expenses to assist countries affected by, or at risk of being affected by, the Ebola outbreak and for efforts to mitigate the risk of illicit acquisition of the Ebola virus and to promote biosecurity practices associated with Ebola outbreak response efforts. These funds were made available from different accounts and are subject to different periods of availability (see table 2).

The Act requires State, in consultation with USAID, to submit reports to the Senate and House Committees on Appropriations, no later than 30 days after enactment, on the proposed uses of the funds, on a country and project basis, for which the obligation of funds is anticipated. These reports are to be updated and submitted every 30 days until September 30, 2016, and every 180 days thereafter until all funds have been fully expended. The reports are to include (1) information detailing how the estimates and assumptions contained in the previous reports have changed and (2) obligations and expenditures on a country and project basis.
In addition, Section 9002 of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, authorized USAID and State to use funds appropriated for the Global Health Programs (GHP), International Disaster Assistance (IDA), and Economic Support Fund (ESF) accounts to reimburse appropriation accounts administered by USAID and State for obligations incurred to prevent, prepare for, and respond to the Ebola outbreak prior to enactment of the Act. USAID reports that it has made reimbursements totaling $401 million for obligations made prior to the Act, using $371 million from the IDA account and $30 million from the ESF account of the fiscal year 2015 appropriation. USAID reports that these obligations were incurred by the Office of U.S. Foreign Disaster Assistance (OFDA); the Office of Food for Peace (FFP); the Bureau for Global Health; and missions in Liberia, Guinea, and Senegal. Figure 4 shows USAID's reported obligations by office and bureau. State reported that it did not reimburse funding for obligations made prior to the Act.

As of July 1, 2016, USAID and State had obligated 58 percent and disbursed more than one-third of the $2.5 billion appropriated by the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, for Ebola response and preparedness activities. USAID and State obligated funds from various appropriation accounts for Ebola activities during different stages of the response, with the largest shares of funding going to Pillar 1 activities to control the outbreak and to support regional Ebola activities in multiple countries. In response to the Ebola outbreak, USAID obligated funds from the IDA account, and State obligated funds from the Diplomatic & Consular Programs (D&CP) account. As the U.S. government shifted its focus to mitigating second-order impacts and strengthening global health security, USAID obligated funds from the ESF and GHP accounts, and State obligated Nonproliferation, Antiterrorism, Demining, and Related Programs (NADR) account funds for biosecurity activities. As of July 1, 2016, USAID and State had obligated a total of almost $1.5 billion (58 percent) and disbursed $875 million (35 percent) of the $2.5 billion appropriated for Ebola response and preparedness activities (see fig. 5).

USAID and State used different appropriation accounts to fund Ebola activities. As shown in table 3 below, USAID allocated the greatest amount of funding for Ebola activities from the IDA account, followed by the ESF and GHP accounts. State allocated funding from the NADR account and the D&CP account for Ebola activities. USAID notified the relevant congressional committees on April 8, 2016, that it intended to obligate $295 million in ESF Ebola funds; of this amount, USAID and State would reprogram $215 million for international response activities related to Zika (including $78 million to CDC for Zika response activities) and provide $80 million for CDC's international coordination of response efforts related to Ebola. As USAID obligates funding for Zika activities, the unobligated balances in the ESF account will continue to decrease. ESF funds appropriated in the Act must have been obligated by September 30, 2016. See figure 6 below for obligations and disbursements by appropriation account, as of July 1, 2016.

USAID and State used different appropriation accounts to fund Ebola activities across the four pillars of the U.S. government's Ebola strategy.
For example, USAID primarily used IDA account funds to control the outbreak (Pillar 1), ESF account funds to mitigate second-order impacts of Ebola (Pillar 2), and GHP account funds to strengthen global health security (Pillar 4). USAID allocated all of the funding from the Operating Expenses account to building coherent leadership and operations (Pillar 3). As of July 1, 2016, USAID had obligated more than 60 percent of funds allocated for Pillars 1, 2, and 4 and just under 50 percent of the funds allocated for Pillar 3; USAID and State had disbursed 85 percent of Pillar 1 obligations. See table 4 for the status of allocated funding by Ebola strategy pillar.

USAID allocated the greatest amount of funding to activities to control the Ebola outbreak (Pillar 1) and allocated approximately similar amounts of funding to mitigate the second-order impacts of Ebola (Pillar 2) and to strengthen global health security (Pillar 4). Figure 7 shows allocations, obligations, and disbursements by Ebola strategy pillar, as of July 1, 2016. See appendix II for trends in obligations and disbursements by Ebola strategy pillar, as of July 1, 2016.

USAID allocated the largest share of its funding to support regional Ebola activities. Ebola funding for regional activities includes all five appropriation accounts administered by USAID from the fiscal year 2015 appropriation (IDA, ESF, GHP, USAID Operating Expenses, and USAID's Office of Inspector General) and State's NADR account. The GHP account represents 42 percent of obligated funding for regional activities, while ESF and IDA represent approximately 29 percent and 21 percent of such funding, respectively. See table 5 below for the status of allocated Ebola funding by geographic area.

Among the three countries most affected by the Ebola virus, Liberia received the most obligated and disbursed funding from USAID, as the U.S. government took a lead role in Ebola response efforts in that country. Figure 8 shows allocations, obligations, and disbursements for Liberia, compared with Sierra Leone and Guinea. See appendix III for trends in obligations and disbursements for each country, as of July 1, 2016.

Prior to the enactment of the Act in December 2014, USAID had obligated $401 million to respond to the Ebola outbreak. USAID obligated the majority of this funding from the IDA account for Pillar 1 activities to control the outbreak in Liberia. State did not obligate any funding prior to the Act but began obligating D&CP funding to respond to the Ebola outbreak in April 2015.

During the early stages of the response to the Ebola outbreak, USAID obligated IDA funds for the majority of its Ebola activities, both before and after it received funding for Ebola activities from the Act. By February 2015, USAID had obligated more than $550 million in IDA funding. USAID obligated IDA account funding at a lower rate after July 1, 2015, as USAID began to shift its focus to recovery activities. As of July 1, 2016, USAID had obligated more than 60 percent of IDA account funding and disbursed more than 80 percent of obligations. USAID used the majority of IDA funds to control the outbreak in Liberia (Pillar 1). Activities to control the outbreak included providing care to Ebola patients; supporting safe burials; promoting infection prevention and control at health facilities; distributing Ebola protection and treatment kits; and conducting Ebola awareness, communication, and training programs.
USAID obligated approximately 60 percent of IDA funds for Liberia as of July 1, 2016, which reflects the U.S. government's lead role in responding to the Ebola outbreak in that country. USAID plans to use the remaining approximately 40 percent of unobligated IDA funds for future Ebola response activities as needed. According to USAID officials, since April 2016, OFDA has programmed IDA funding to provide support for maintaining a residual response capacity and for transitioning to long-term development efforts. According to OFDA officials, these activities will require a modest amount of funding in fiscal year 2017. OFDA officials also noted that the unobligated IDA balance would remain available to address any new cases or outbreaks of Ebola that spread beyond the ability of the host governments to contain them.

State obligated most of its D&CP account funding to respond to the Ebola outbreak in early 2015. In February 2015, State notified the relevant congressional committees of its intention to allocate $33 million of $36 million in D&CP account funding to the Office of Medical Services. By the end of May 2015, the Office of Medical Services had obligated 97 percent of its allocated funding and disbursed all of its obligated funding for medical evacuations; overseas hospitalization expenses; and personal protective equipment, among other things. State's Bureau of Administration, which provided grants to international schools affected by the Ebola outbreak, obligated 95 percent of its allocated D&CP funding by the end of April 2015. State's Africa Bureau also obligated funding in the early stages of the Ebola response, as it provided support for State's Ebola Coordination Unit and U.S. embassies affected by Ebola. The Africa Bureau obligated more than half of its D&CP funding for Ebola activities by the end of July 2015. As of June 30, 2016, State had obligated 94 percent of the D&CP account funding and disbursed 99 percent of its obligated funding.

USAID started to focus its efforts on mitigating second-order impacts of the outbreak and strengthening global health security between June 1, 2015, and October 1, 2015, using funding from the ESF and GHP accounts. In May 2015, State began obligating NADR funds for biosecurity activities.

USAID increased obligations of ESF by approximately $68 million between June 1, 2015, and October 1, 2015, as USAID offices such as the Bureau for Global Health and the U.S. Global Development Lab obligated ESF funds for activities to mitigate the second-order impacts of the outbreak (Pillar 2). USAID disbursements of ESF funding were less than $5 million until September 1, 2015. As of July 1, 2016, USAID had obligated $251 million from the ESF account for Ebola activities, which represented 35 percent of its total allocated funding from the ESF account. Of the ESF funding that USAID obligated to mitigate the second-order impacts of Ebola, the majority funded activities in Liberia, Sierra Leone, and Guinea. Such activities included developing facilities to decontaminate health-care workers and equipment, restoring routine health services, and strengthening health information systems to better respond to future disease outbreaks. Approximately 45 percent of obligated ESF funds supported such activities in Liberia, while 22 percent and 17 percent funded activities in Sierra Leone and Guinea, respectively.
As the numbers of new Ebola cases declined, USAID also began to focus on longer-term efforts to strengthen global health security, using funding from the GHP account. Accordingly, USAID reported no obligations of GHP funding before June 1, 2015, and no disbursements until August 1, 2015. As of July 1, 2016, USAID had obligated 59 percent and disbursed 10 percent of its GHP account funds for Ebola response activities. The Bureau for Global Health, which programs the majority of GHP funding, added all of its GHP funding for Ebola activities to existing awards. Because of the method of payment used for these awards, funding from older appropriations is used first. As a result, the percentage of GHP funding that USAID has disbursed from the fiscal year 2015 appropriation for Ebola activities is the lowest (10 percent) of any account. GHP funding appropriated in the Act does not expire, so USAID can obligate and disburse these funds in future fiscal years.

USAID has obligated all of the funding in the GHP account to strengthen global health security (Pillar 4). Such activities included building laboratory capacity, strengthening community surveillance systems to detect and monitor Ebola and other diseases, and building the capacity of West African governments to prepare for and respond to infectious disease outbreaks. Nearly all GHP funding is regional funding rather than funding for a specific Ebola-affected country. For trends in obligations and disbursements of IDA, ESF, and GHP funds for Ebola from February 3, 2015, to July 1, 2016, see appendix IV.

State's Bureau of International Security and Nonproliferation did not begin obligating funding from the NADR account until May 1, 2015, and did not disburse any NADR funds for biosecurity activities until September 1, 2015. Such activities included training on biosecurity best practices and strategies for Ebola sample management as well as conducting biosecurity risk assessments. All NADR funding supports regional activities in West Africa rather than activities in a single country. As of July 1, 2016, State had obligated all of its NADR funding and disbursed 40 percent of obligated funding from this account.

Twenty-one of USAID's 271 reimbursements for obligations incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, were not in accordance with the reimbursement provisions of the Act. These 21 transactions account for over $60 million, or roughly 15 percent, of the approximately $401 million reimbursed. Because USAID did not have the legal authority to make the reimbursements that were not in accordance with the reimbursement provisions in the Act, these reimbursements represent unauthorized transfers. As of October 2016, USAID had not determined whether it had budget authority to support the obligations against the accounts that were improperly reimbursed, possibly resulting in a violation of the Antideficiency Act. In addition, USAID has not developed written policies or procedures to guide staff on appropriate steps to take for the reimbursement process. Such policies or procedures could provide USAID with reasonable assurance that it will comply with reimbursement provisions and mitigate the risk of noncompliance.

Of the 271 reimbursements that USAID made for obligations incurred prior to the enactment of the Act, 21 reimbursements (totaling over $60 million) were not in accordance with the Act.
These 21 reimbursements represent roughly 15 percent of the approximately $401 million that USAID obligated for reimbursements; that $401 million is part of the almost $1.5 billion that USAID had obligated as of July 1, 2016. USAID made the other 250 reimbursements (totaling about $341 million) in accordance with the reimbursement provisions of the Act, for funds that OFDA and FFP obligated prior to enactment of the Act. To assess whether USAID's reimbursements were in accordance with the Act, we reviewed USAID documentation to determine whether each reimbursement met the four provisions described in the text box below.
Reimbursement Provisions of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015
1. The reimbursement was made to the same appropriation account as the account from which the U.S. Agency for International Development originally obligated the funds.
2. The obligation was incurred prior to the December 16, 2014, enactment of the Act.
3. The obligation was incurred to prevent, prepare for, and respond to the Ebola outbreak.
4. The reimbursement was made from the International Disaster Assistance, Economic Support Fund, and Global Health Programs accounts of the fiscal year 2015 appropriation.
For the 21 reimbursements (totaling over $60 million), USAID did not reimburse the same appropriation accounts as the accounts from which USAID originally obligated the funds. Moreover, for 4 of the 21 reimbursements, USAID reimbursed obligations for which it did not document that it incurred the obligation to prevent, prepare for, or respond to the Ebola outbreak. Specifically, USAID did not reimburse the same appropriation account as the account from which USAID originally obligated the funds. The Bureau for Global Health; FFP; and the missions in Guinea, Liberia, and Senegal incurred over $60 million in obligations from the GHP, Title II of the Food for Peace Act (Title II), and Development Assistance accounts. Although USAID used funding from the ESF and IDA accounts of the fiscal year 2015 appropriation to make 21 reimbursements for these obligations, the agency did not reimburse the GHP, Title II, and Development Assistance accounts from which USAID originally obligated the funds, as the Act required (see table 6). Instead, USAID reimbursed the office, bureau, or mission that originally obligated the funds. These reimbursements were made to different accounts, with different spending authorities, than those from which the original obligations were made. For example, FFP obligated approximately $30 million in Title II account funds for the World Food Program's regional program to address the urgent food needs of populations affected by the Ebola outbreak, but USAID used approximately $30 million in IDA account funds from the fiscal year 2015 appropriation to reimburse FFP and credited the funds to FFP's IDA account. USAID also reimbursed obligations for which it did not document that it incurred the obligation to prevent, prepare for, and respond to the Ebola outbreak. USAID made four reimbursements, totaling about $3.3 million, for which USAID was unable to provide documentation showing that the original obligations funded activities to prevent, prepare for, and respond to the Ebola outbreak. For example, USAID reimbursed approximately $1.3 million for an obligation incurred by the Bureau for Global Health in September 2014. However, USAID officials noted that they did not document the amount or purpose of the obligation at the time the funds were obligated.
In July 2016, in response to our inquiry, USAID prepared a memorandum to attest that the original obligation was approximately $1.3 million and funded technical assistance for Ebola preparedness to 13 African countries. In another instance, USAID reimbursed $500,000 for an obligation incurred for the preparation of laboratories to detect Middle East Respiratory Syndrome (MERS) and other acute respiratory infections in the Africa region. However, USAID was unable to provide us documentation showing how the obligation funded activities related to the Ebola outbreak. Because USAID did not have the legal authority to make the reimbursements that were not in accordance with the reimbursement provisions in the Act, these 21 reimbursements represent unauthorized transfers. Further, the Antideficiency Act prohibits an agency from obligating or expending more than has been appropriated to it. As of October 2016, USAID had not determined whether it had the budget authority to support the obligations against the accounts that were improperly reimbursed. See appendix V for a complete list of USAID's reimbursements for obligations incurred prior to the fiscal year 2015 appropriation. The concerns we identified with the 21 reimbursements could partly be attributed to the fact that USAID has not developed written policies or procedures to guide staff on appropriate steps to take to make reimbursements and document them. Internal control standards require that management design control activities—actions established through policies and procedures—to achieve objectives and respond to risks, and that management implement such activities through policies. By developing written policies and procedures, offices and bureaus would know what steps they should take to ensure that reimbursements are made in accordance with an appropriation act and other applicable appropriations laws and are recorded in a manner that allows documentation to be readily available for examination. Such written policies and procedures could provide USAID with reasonable assurance that it is complying with reimbursement provisions and help mitigate the risk of noncompliance. Without written policies and procedures, USAID does not have a process in place for making reimbursements and maintaining documentation to show that it complied with the reimbursement provisions of the Act. Our review found that USAID made 21 reimbursements that it did not have the legal authority to make, possibly resulting in the obligation or expenditure of funds in excess of appropriations, which would violate the Antideficiency Act. In addition, because USAID did not have written policies and procedures for maintaining documentation of the reimbursements, USAID officials spent several months locating records to demonstrate to us that USAID had reimbursed 250 obligations in a manner consistent with the provisions of the Act. USAID officials also noted that because they did not document the purpose of some obligations at the time the funds were obligated, they had to create a memorandum to explain how two reimbursed obligations funded USAID activities to prevent, prepare for, and respond to the Ebola outbreak. West Africa has experienced the largest and most complex Ebola outbreak since the virus was first discovered, resulting in more than 11,000 deaths. The outbreak not only disrupted the already weak health-care systems in Guinea, Liberia, and Sierra Leone but also contributed to reduced economic activity, job loss, and food insecurity in these countries.
USAID and State have funded a range of activities to control the outbreak and will continue to fund longer-term efforts to mitigate the second-order effects and strengthen global health security. The Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, authorized the reimbursement of obligations incurred prior to enactment to prevent, prepare for, and respond to the Ebola outbreak. While most of USAID's reimbursements were made in accordance with the reimbursement provisions of the Act, roughly 15 percent of the funds that USAID reimbursed were not. In particular, USAID did not in all cases reimburse the same appropriation accounts from which it obligated funds, as required by the Act, and in several instances it made reimbursements for obligations for which it did not document that it incurred the obligation to prevent, prepare for, and respond to the Ebola outbreak. As a result, USAID may not have the budget authority to support the obligations against the accounts that were erroneously reimbursed, which may result in a violation of the Antideficiency Act. Furthermore, without written policies and procedures, USAID risks the possibility of noncompliance recurring should USAID be granted reimbursement authority in the future. To ensure that USAID reimburses funds in accordance with section 9002 of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, we recommend that the Administrator of USAID take the following three actions:
1. Reverse reimbursements that were not made to the same appropriation account as the account from which USAID obligated the funds.
2. Reverse reimbursements for which there is no documentary evidence that the obligation was incurred to prevent, prepare for, and respond to the Ebola outbreak.
3. Determine whether reversing any of these reimbursements results in the obligation of funds in excess of appropriations in violation of the Antideficiency Act and, if so, report any violations in accordance with law.
To help ensure that USAID complies with reimbursement provisions that may arise in future appropriations laws, we recommend that the Administrator of USAID develop written policies and procedures for the agency's reimbursement process. We provided a draft of this report to USAID and State for comment. In its written comments, reproduced in appendix VI, USAID agreed with our findings and recommendations. USAID noted that it would reverse reimbursements that were not made to the same appropriation account as the account from which USAID originally obligated funds, and that it is reviewing whether it has violated the Antideficiency Act. State did not provide formal comments. USAID and State also provided technical comments, which we incorporated throughout the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the U.S. Agency for International Development, the Department of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
The Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act), included a provision for us to conduct oversight of the U.S. Agency for International Development's (USAID) and the Department of State's (State) activities to prevent, prepare for, and respond to the 2014 Ebola outbreak, as well as of the reimbursements made. We examined (1) USAID's and State's obligations and disbursements for Ebola activities and (2) the extent to which USAID made reimbursements in accordance with the requirements of the fiscal year 2015 appropriations act. To examine USAID's and State's obligations and disbursements for Ebola activities, we analyzed each agency's allocation, obligation, and disbursement data for Ebola response and preparedness activities reported in USAID's and State's reports to the Senate and House Committees on Appropriations mandated by the Act. We analyzed the allocation, obligation, and disbursement data that USAID and State reported from February 3, 2015, to July 1, 2016. We obtained and analyzed the data from USAID for the International Disaster Assistance; Global Health Programs; Economic Support Fund; USAID Operating Expenses; Nonproliferation, Antiterrorism, Demining, and Related Programs; and USAID Office of Inspector General appropriation accounts. We obtained and analyzed the data for the Diplomatic & Consular Programs account from State, which State reported separately. We reviewed each agency's reports to congressional committees to determine the activities funded and reasons for any changes in the agencies' cost estimates and use of the funds. We interviewed USAID and State officials in Washington, D.C., from each bureau and office that administers funding from accounts of the fiscal year 2015 appropriation to discuss the status of the agencies' obligations and disbursements for activities to prevent, prepare for, and respond to the Ebola outbreak in West Africa; methods for reporting the funding data; and plans for obligating and disbursing the funds. We also interviewed officials from each of USAID's and State's offices that report on the funds to obtain information about their budgeting processes and terms in order to determine the best method for analyzing the data across accounts. We then reviewed the data and information reported and consulted with USAID and State officials on the accuracy and completeness of the data and information. When we found data discrepancies, we contacted relevant agency officials and obtained from them the information necessary to resolve the discrepancies. To assess the reliability of the data provided, we requested and reviewed information from agency officials regarding the underlying financial data systems and the checks and reviews used to generate the data and ensure their accuracy and reliability. To further ensure the reliability of the data, we obtained from USAID officials data from the financial data system that USAID used to report obligation and disbursement data and compared them to a sample of the data that we analyzed. We determined that the data we used were sufficiently reliable for our purposes of examining USAID's and State's obligations and disbursements of the funds. To examine the extent to which USAID made reimbursements in accordance with the Act, we reviewed USAID's reports to the Senate and House Committees on Appropriations mandated by the Act to determine the obligations that the agency incurred prior to enactment and the reimbursements that USAID made for the obligations.
We obtained data and documentation from each USAID bureau and office that administered the appropriation accounts for the obligations incurred prior to the fiscal year 2015 appropriation. To assess the reliability of the data provided, we requested and reviewed information from USAID officials regarding the underlying financial data systems and the checks and reviews used to generate the data and ensure their accuracy and reliability. We determined that the data we used were sufficiently reliable for our purposes of examining USAID's reimbursements. We analyzed these data and reviewed documents to determine the dates and amounts of the obligations, the activities that the obligations funded, and the appropriation accounts used for the obligations. We also analyzed the data and reviewed documents to determine the amounts of the reimbursements, the appropriation accounts from the fiscal year 2015 appropriation that USAID used for the reimbursements, and the appropriation accounts to which USAID reimbursed the funds. We reviewed USAID's congressional notifications and award documentation to determine the amounts, purposes, and appropriation accounts for the obligations that USAID incurred prior to the enactment of the Act, as well as the amounts and appropriation accounts from which USAID reimbursed the funds. We interviewed USAID and State officials about the extent to which the agencies incurred obligations prior to the appropriation and reimbursed the funds. We also interviewed USAID officials and reviewed USAID's policies for the management of funds to determine the extent to which the agency had developed and implemented written policies or procedures for the reimbursement process. We assessed the extent to which the obligations and reimbursements were consistent with the provisions of the Act, as well as the extent to which the procedures USAID used met relevant internal control standards. We conducted this performance audit from June 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of July 1, 2016, the U.S. Agency for International Development (USAID) and the Department of State (State) had obligated the largest share of the Ebola funding provided by the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act), for activities to control the outbreak (Pillar 1). USAID had obligated lesser amounts of funding for activities to mitigate second-order impacts of Ebola (Pillar 2) and activities to strengthen global health security (Pillar 4). USAID obligated a minimal amount of funding for building coherent leadership and operations (Pillar 3). For all strategy pillars, the largest percentage increase in obligations occurred between April 1, 2015, and July 1, 2015 (see fig. 9). The U.S. Agency for International Development (USAID) obligated funding from the International Disaster Assistance (IDA) and Economic Support Fund (ESF) accounts for almost all of its country-specific Ebola funding. More than 80 percent of the funding that USAID has obligated for Ebola activities in Guinea, Liberia, and Sierra Leone is from the IDA account.
Since USAID began its reporting to Congress on Ebola funding in February 2015, IDA obligations for Liberia show the greatest increase between February 3, 2015, and July 1, 2015, and ESF obligations for Liberia show the greatest increase between April 1, 2015, and October 1, 2015 (see fig. 10). ESF disbursements did not begin until after July 1, 2015. USAID deobligated and reprogrammed IDA funding for Liberia between January 1, 2016, and July 1, 2016, as it needed fewer funds to control the outbreak. As with Liberia, since USAID began reporting to Congress on its Ebola funding in February 2015, USAID obligations of IDA funding for Sierra Leone show the greatest increases between February 3, 2015, and July 1, 2015. However, USAID made no obligations of ESF funding before April 1, 2015, and no disbursements of ESF funding until after July 1, 2015 (see fig. 11). Since USAID began reporting to Congress on its Ebola funding in February 2015, the greatest increase in obligations from the IDA account for Guinea occurred between February 3, 2015, and July 1, 2015 (see fig. 12). The International Disaster Assistance (IDA) account represents the majority of funding obligated and disbursed for Ebola activities to control the outbreak. After the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, obligations and disbursements from the IDA account increased the most between April 1, 2015, and July 1, 2015 (see fig. 13). The U.S. Agency for International Development (USAID) deobligated and reprogrammed IDA funding between January 1, 2016, and July 1, 2016, as it needed fewer funds to control the outbreak (see fig. 13). The Office of U.S. Foreign Disaster Assistance (OFDA) programmed the majority of IDA funding. The Economic Support Fund (ESF) account represents the majority of funding obligated and disbursed for activities to mitigate the second-order impacts of Ebola. USAID increased obligations of ESF significantly between April 1, 2015, and October 1, 2015, as new cases of Ebola decreased. Disbursements of ESF funding did not increase significantly until after July 1, 2015 (see fig. 14). USAID's Bureau for Global Health and U.S. Global Development Lab programmed the majority of ESF funding. The Global Health Programs (GHP) account represents the majority of funding obligated and disbursed for activities to strengthen global health security. USAID did not obligate any GHP funding between February 3, 2015, and April 1, 2015, and did not report disbursements until August 1, 2015, as USAID intends GHP to fund longer-term activities to respond to future outbreaks (see fig. 15). The Bureau for Global Health programs almost all GHP funding. The U.S. Agency for International Development (USAID) made 271 reimbursements for the approximately $401 million in total obligations that USAID's Bureau for Global Health; Office of Food for Peace (FFP); Office of U.S. Foreign Disaster Assistance (OFDA); and the missions in Guinea, Liberia, and Senegal incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act). Of the 271 reimbursements, USAID made 250 (totaling about $341 million) in accordance with the reimbursement provisions of the Act. USAID made these 250 reimbursements for funds that OFDA and FFP obligated prior to enactment of the Act. However, USAID made 21 reimbursements (totaling over $60 million) that were not in accordance with the reimbursement provisions of the Act.
USAID made these 21 reimbursements for funds that the Bureau for Global Health, FFP, and the missions obligated prior to enactment of the Act. Tables 7 through 9 provide information about these 21 reimbursements and the requirements of the Act that each of the reimbursements did not meet. In addition to the contact named above, Valérie L. Nowak (Assistant Director), Bradley Hunt (Analyst-in-Charge), Ashley Alley, Bryan Bourgault, Debbie Chung, Neil Doherty, Rachel Dunsmoor, Jill Lacey, Amanda Postiglione, and Matthew Valenta made key contributions to this report.
In March 2014, the World Health Organization reported an Ebola outbreak in West Africa and, as of June 2016, reported that the outbreak had resulted in more than 11,000 deaths in Guinea, Liberia, and Sierra Leone. USAID and State initially funded Ebola activities using funds already appropriated. In December 2014, Congress appropriated approximately $2.5 billion to USAID and State, in part, for international efforts to prevent, prepare for, and respond to an Ebola outbreak and mandated that the agencies report periodically on their use of the funds. Congress also allowed the agencies to reimburse accounts for obligations incurred for Ebola activities prior to the fiscal year 2015 appropriation. The Act also included a provision for GAO to conduct oversight of USAID and State activities to prevent, prepare for, and respond to the Ebola outbreak. This report examines (1) USAID's and State's obligations and disbursements for Ebola activities and (2) the extent to which USAID made reimbursements in accordance with the fiscal year 2015 appropriations act. GAO analyzed USAID and State funding, reviewed documents on Ebola activities, and interviewed agency officials. As of July 1, 2016, the U.S. Agency for International Development (USAID) and the Department of State (State) had obligated 58 percent and disbursed more than one-third of the $2.5 billion appropriated for Ebola activities. In the early stages of the U.S. response in West Africa, USAID obligated $883 million to control the outbreak, and State obligated $34 million for medical evacuations, among other activities. Subsequently, the United States shifted focus to mitigating second-order impacts, such as the deterioration of health services and food insecurity, and strengthening global health security. Accordingly, USAID obligated $251 million to restore health services, among other activities, and $183 million for activities such as strengthening disease surveillance, while State obligated $5 million for biosecurity activities. Of 271 reimbursements that USAID made for obligations incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act), USAID made 21 reimbursements, totaling over $60 million, that were not in accordance with the Act. These 21 reimbursements represent roughly 15 percent of the $401 million that USAID obligated for reimbursements, of the almost $1.5 billion that had been obligated as of July 1, 2016 (see fig.). For these 21 reimbursements, USAID did not reimburse the same appropriation accounts as the accounts from which it originally obligated the funds, and therefore it did not have legal authority to make these reimbursements. In addition, four reimbursements were for obligations that USAID did not document were for Ebola activities. In reviewing the reimbursements, GAO found that USAID does not have written policies or procedures for staff to follow in making and documenting reimbursements. As a result, USAID does not have a process that could provide reasonable assurance that it complies with reimbursement provisions of applicable appropriations laws, such as the reimbursement provisions in the Act. GAO is making four recommendations, including that USAID should reverse reimbursements not made in accordance with the Act and develop written policies and procedures for its reimbursement process. USAID concurred with GAO's recommendations.
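A quick arithmetic check, offered only as an illustration of how the headline funding figures above fit together (all figures rounded as reported):

\[
0.58 \times \$2.5\ \text{billion} \approx \$1.45\ \text{billion obligated}; \qquad \$341\ \text{million} + \$60\ \text{million} \approx \$401\ \text{million}; \qquad \$60\ \text{million} / \$401\ \text{million} \approx 15\ \text{percent}
\]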
The Army, which has over 250,000 TWVs, generally categorizes the vehicles as heavy, medium, and light. Heavy TWVs represent about 10 percent of the Army's TWV fleet and include vehicles like the Heavy Equipment Transporter System, which is used to transport main battle tanks and other heavy equipment. Medium TWVs represent about 40 percent of the Army's TWV fleet and include vehicles for hauling cargo and for serving as launch and support platforms for weapon systems such as the High-Mobility Artillery Rocket System. Light TWVs represent about 50 percent of the Army's TWV fleet and currently consist of the HMMWV family of vehicles, which began production in 1983. The Army's HMMWV program also provides vehicles to satisfy Marine Corps and Air Force requirements. The HMMWV has gone through various upgrades during its nearly 30-year history and has served as DOD's primary wheeled vehicle for shelter carriers, command and control systems, light cargo and troop carriers, weapons carriers, and ambulances. This report addresses issues related to light tactical vehicles and the M-ATV. The M-ATV is not considered a light vehicle by its weight, but it is being used for functions typically performed by light tactical vehicles. In February 2005, Marine Corps combatant commanders identified an urgent operational need for armored tactical vehicles to increase crew protection and mobility of Marines operating in hazardous fire areas against improvised explosive devices, rocket-propelled grenades, and small arms fire. In response, the Marine Corps identified the solution as the up-armored HMMWV. Over the next 18 months, however, combatant commanders continued to call for more robust mine-protected vehicles. The solution to the requirement was the MRAP family of vehicles. MRAPs provide warfighters with platforms capable of mitigating the effects of improvised explosive devices, underbody mines, and small arms fire threats, which are currently the greatest casualty producers overseas. The MRAP family of vehicles consists of four categories: Category I for urban combat missions; Category II for convoy escort, troop transport, explosive ordnance disposal, and ambulance missions; Category III for clearing mines and improvised explosive devices; and the M-ATV for small-unit combat and tactical operations in complex and highly restricted rural, mountainous, and urban areas. The current M-ATV requirement is to rapidly acquire 8,104 vehicles for use primarily in Afghanistan. Delivery orders for production were awarded to a single contractor, Oshkosh Defense. Like the Category I, II, and III variants, M-ATVs are prepared during production at Oshkosh to accommodate government-furnished equipment, which is then integrated at the Space and Naval Warfare Systems Command. In 2007, the Army reported that there were approximately 120,000 HMMWVs in use by the services. The Army also reported that the need for additional armor for current operations, among other things, had pushed the use of the HMMWV far beyond its original purpose and capabilities.
The effect of this employment, according to the Army, led to an imbalance in protection, payload, and performance. Specifically:
Protection: light vehicles required the adoption of supplemental armor.
Payload: the supplemental armor reduced the vehicles' usable payload (including warfighters, mission equipment, and cargo).
Performance: the supplemental armor degraded mobility, reliability, and maintainability; decreased fuel efficiency; decreased vehicle stability and safety; and decreased operational availability.
The objective of the JLTV program is to address the HMMWV fleet's protection, payload, and performance imbalance within a transportable vehicle. JLTV is expected to provide comparable protection to the MRAP vehicles in most cases—the major exception being underbody protection—but with better payload and performance. To guide the future investment decisions related to JLTV and the M-ATV, as well as other TWVs, the Army and Marine Corps have prepared investment strategies, which enable the services to synchronize their respective TWV programs to maximize the use of limited funding. The Army's most recent strategy was prepared in October 2009 and will be updated later this year. The Marine Corps prepared a TWV strategy in August 2008; an updated strategy is scheduled for release at the end of this year. In 2008, the Army and Marine Corps also prepared a joint TWV investment strategy at the request of the Office of Management and Budget. DOD is also committed to preparing a comprehensive and unified strategy and implementation plan for making sound investment decisions for TWVs. In 2007, at the request of the Subcommittees, we assessed the extent to which DOD had developed a DOD-wide TWV investment strategy. We found that DOD did not have such a strategy. To improve DOD's ability to plan for and manage the development, production, and sustainment of TWVs across the department, we recommended that the Secretary of Defense develop a DOD-wide strategy and implementation plan for making sound investment decisions for TWVs. DOD concurred with our recommendation, stating the following: Upon completion of the ongoing TWV studies by the Army and Marine Corps, and the Analysis of Alternatives for Joint Light Tactical Vehicles, DOD will unite these efforts into a comprehensive strategy that dovetails with the services' equipping strategies. DOD will endeavor to align requirements, resources, and acquisition strategies into a unified plan for TWV investment decisions. DOD has acquired and fielded the M-ATV as quickly as possible in response to an urgent need. This acquisition was successful on a number of fronts. Multiple vendors responded to the urgency; in June 2009 the government awarded the first production delivery order to a single manufacturer—Oshkosh Defense—and the contractor has consistently delivered vehicles well ahead of schedule. The cost to acquire and field 8,104 M-ATVs is now estimated to be $12.5 billion. As it did for the MRAP, DOD chose to begin fielding the M-ATV before it had completed developmental and operational testing due to the urgency of the requirement. Before fielding began in December 2009, however, ballistic testing established that the vehicles met the requirements for crew protection and vehicle survivability, and automotive testing confirmed that the vehicles were generally safe to operate. No major issues have been identified in subsequent testing, and the vehicles appear to be performing well in their operational environments.
DOD expedited many M-ATV acquisition decisions because of the urgent nature of the requirement. In September 2008, combatant commanders identified a need for a lighter and more agile version of the MRAP—one that offered at least the same protection as the MRAP but was better suited to the conditions in Afghanistan. In quick succession, the Joint Staff approved the requirement; the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the Navy to procure and test vehicles against the requirement; and the Army released a request for proposals. In early 2009, the Army examined proposals, armor samples, and production-representative vehicles and in April awarded fixed-price, indefinite delivery, indefinite quantity contracts to five manufacturers for three additional vehicles for mobility and ballistic testing. In seeking sources for the vehicles, the government placed a strong emphasis on mature, nondevelopmental vehicles and on contractor support for an accelerated program schedule—one of the lessons learned from the MRAP acquisition—according to the program manager. Fully functional prototypes had to be available quickly for test and evaluation. Of the evaluation criteria for awarding a contract, delivery schedule and production capability were second only to the technical performance requirement to provide MRAP-like protection. The first vehicles delivered were designated for the testing necessary to ensure that the vehicles were safe before users operated them. The next priority was to begin delivering vehicles for home station training, which was required before the vehicles could be fielded to users. Delivery of vehicles for theater use began shortly afterward and took place concurrently with delivery and fielding of vehicles for home station training. The government reserved the right to place production orders with multiple contractors, as it had for the MRAP, but elected to issue a delivery order for production vehicles to a single contractor—Oshkosh Defense—after the vehicles completed the additional tests in June 2009. The contractor delivered the first 46 vehicles the next month, and the first vehicles arrived in theater in December 2009, about 15 months after combatant commanders identified the need. By December, Oshkosh had ramped up delivery to its maximum of 1,000 vehicles per month and had delivered 300 more vehicles than planned in the first 6 months of the effort. The contractor consistently exceeded planned deliveries and has begun to ramp production back down, with planned deliveries ending in November 2010. As of late August 2010, 7,488 vehicles had been delivered to the government and 4,379 vehicles had been fielded to units in Afghanistan. Fielding is expected to be completed in December 2010. DOD took several actions to expedite fielding, including beginning the process of installing vehicle mission equipment during the manufacturing process—a lesson learned from MRAP production. On MRAP vehicles, some work had to be done after production and before integration could begin, which took time. For example, wiring on the vehicles had to be reconfigured first to accommodate the mission equipment. On the M-ATV, the wiring was configured on the manufacturing line. Figure 1 compares planned and actual deliveries of vehicles to the vehicles fielded through August 2010, and the planned deliveries through November 2010.
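To put these delivery figures in proportion, a simple calculation from the numbers above (offered as an illustration, not an official program metric):

\[
7{,}488 / 8{,}104 \approx 92\ \text{percent of the requirement delivered}; \qquad 4{,}379 / 8{,}104 \approx 54\ \text{percent fielded as of late August 2010}
\]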
As of June 2010, the cost to acquire the M-ATV is estimated to be about $12.5 billion through fiscal year 2024. Through fiscal year 2010, all costs continue to be resourced through supplemental funding across all services, except for program management costs of approximately $55 million in fiscal years 2010 through 2015, which are to be budgeted in the services' baseline budgets. All other program costs will remain budgeted via annual supplemental requests until the services are able to determine long-term usage plans and add funding to their baseline budgets. It is not currently clear when the services will include MRAP and M-ATV operating and support costs in their base budgets. Table 1 summarizes the estimated acquisition costs. The average unit cost for an M-ATV is about $1.4 million, including the vehicle, mission equipment, and transportation. Mission equipment—which the program office estimates will cost about $4.3 billion—includes communications and situational awareness subsystems. Procurement costs beginning in fiscal year 2011 are for expected upgrades to the vehicles. The estimate includes about $1.6 billion for fiscal years 2011 through 2018 to pay for these upgrades. For example, an increase in vehicle weight due to the addition of armor could in turn require upgrades to subsystems such as the independent suspension system and door assist. Other upgrades could include additional egress lighting, seat upgrades, and floor upgrades. Conventional DOD acquisition policy reflects a statutory requirement that weapons be tested before they are fully fielded to the user. However, due to the urgency of the requirement, and because the vehicle technologies and designs were mature, DOD began fielding the M-ATV in theater well before it completed the bulk of the planned testing. The test plan included ballistic tests, two phases of developmental tests, and operational test and evaluation. This approach resulted in a high degree of overlap between testing and delivery of the M-ATV. For example, 6,244 vehicles had already been placed on contract and 229 had been fielded before operational testing was completed in mid-December 2009. Developmental testing is scheduled to continue even as fielding is completed in December 2010, according to an Army official. Figure 2 shows the concurrent nature of the overall test plan. DOD's emphasis that the competing contractors provide only mature, nondevelopmental vehicles was a key element in achieving the M-ATV schedule objectives. According to program officials, source selection testing of all the contractors' prototype vehicles further mitigated the risk associated with the concurrent M-ATV production and test schedule. Testing evaluated component- and system-level vehicle survivability and crew protection, and included multiple ballistic events against armor samples and candidate vehicles. Source selection also included automotive testing—such as human factors, mobility, braking, and steering—of the prototype vehicles. Before fielding began for any production M-ATVs, additional ballistic testing between October and December 2009 established that the vehicles met the requirements for crew protection and vehicle survivability, and automotive testing confirmed that the vehicles were generally safe to operate.
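Returning to the cost figures above: one rough way to reconcile the approximately $1.4 million average unit cost with the $12.5 billion total estimate is to set aside the roughly $1.6 billion in planned upgrade funding. This is a back-of-the-envelope illustration, not the program office's costing methodology:

\[
(\$12.5\ \text{billion} - \$1.6\ \text{billion}) / 8{,}104\ \text{vehicles} \approx \$1.34\ \text{million per vehicle}; \qquad \$4.3\ \text{billion} / 8{,}104 \approx \$0.53\ \text{million of that for mission equipment}
\]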
The Director, Operational Test and Evaluation, in a June 2010 live-fire operational test and evaluation report, found that the M-ATV was operationally effective in the conduct of unit missions providing armored tactical mobility over Afghanistan's terrain and that it demonstrated off-road mobility comparable to the up-armored HMMWV with the Expanded Capability Vehicle additional armor. The Director also found that the M-ATV was operationally suitable, although egress from the vehicle was difficult due to the location of some mission equipment, the height of the front seats, and the potential for damage to exterior door handles in a rollover, which leaves no other means to open the door from the outside. M-ATV vehicles appear to be performing well in their operational environment, according to program officials who monitor incident reports from the field. Ballistic and automotive performance in the field mirrors that seen in testing. In addition, users report favorably on both aspects of the vehicles' performance. For example, a recent survey of users in theater found the vehicles are well accepted, have good mobility, and offer protection and survivability comparable to the MRAP's. On the downside, users reported that visibility is limited and the vehicle is cramped, tall, and heavy. Although the vehicle appears to be performing well in the operational environment, future challenges for the M-ATV include keeping pace with the evolving threat. The vehicle meets the current performance specifications and requirements, but the enemy keeps changing its tactics, according to acquisition officials. The officials note that as the requirement for protection increases, for example, armor will have to be added, especially underbody armor—an effort that is already under way. They further note that modifying M-ATV vehicles in Afghanistan will be more difficult than it was to modify the MRAP in Iraq because Afghanistan has less infrastructure and fewer facilities to support such work. According to acquisition officials, this challenge will grow as the number of M-ATVs in theater increases. A comparison of JLTV's capabilities with those of the M-ATV and HMMWV indicates the JLTV is expected to offer protection levels comparable to the M-ATV's at a weight nearer to that of the HMMWV. The JLTV is early in its development and its acquisition cost has not yet been determined but is expected to be substantial. The JLTV acquisition strategy calls for an incremental, knowledge-based approach, reflective of best practices. The services have recognized transportability and reliability as risks and are considering design and requirement tradeoffs to reduce these risks. For example, to meet the transportation requirement, the services have decided to focus on a four-passenger vehicle in the first increment rather than a six-passenger vehicle. Testing of prototypes from each of the contractor teams is under way and is being used to validate and refine requirements and to reduce technical risks prior to the EMD phase. Scheduled to start in late fiscal year 2011, the EMD phase is a crucial point for the program. The Under Secretary of Defense for Acquisition, Technology, and Logistics established several exit criteria that must be completed prior to competitively awarding contracts for the EMD phase.
The Army and Marine Corps are pursuing an incremental development approach for JLTV that would feature a basic capability initially—with enhanced force protection, increased fuel efficiency, greater payload, and other improvements to be added in later increments. DOD acquisition policy defines an increment as a militarily useful and supportable operational capability that can be developed, produced, deployed, and sustained. As we have observed in an earlier report, each increment will need to provide needed capabilities within cost and schedule projections and be justifiable on its own. The services currently plan to acquire 60,383 vehicles in the first increment of JLTV. For that first increment, three categories of JLTV vehicles are being developed with different payload capabilities and expected uses. The Category A vehicle is expected to have a payload of 3,500 pounds and is being developed for general-purpose mobility to move reconnaissance and surveillance teams. The Category B vehicle is expected to have 4,000 to 4,500 pounds of payload capability and is being developed to move infantry, weapons, security forces, and tactical command and control. The Category C vehicle is a utility vehicle being developed for combat support missions and is planned to carry shelters and light cargo and to operate as an ambulance. Table 2 summarizes the different JLTV categories and subconfigurations. The Army awarded JLTV technology development contracts to three industry teams in October 2008, for a total of $263.7 million; however, a bid protest delayed the actual start of work until February 2009. The 36-month technology development effort features competitive prototyping. It includes 15 months for design and build and 12 months for testing prototype vehicles. Prototype testing is under way at the Aberdeen and Yuma test facilities. The services plan a full and open competition to select two contractors for the EMD phase; one of those two contractors will then be selected for the production phase. DOD is planning the Milestone B decision for JLTV during the fourth quarter of fiscal year 2011, with a contract award scheduled for the first quarter of fiscal year 2012. The EMD phase is scheduled to last almost 3 years and culminate with a Milestone C decision point in late fiscal year 2014. A fixed-price production contract is expected to be awarded in the first quarter of fiscal year 2015. The initial operating capability is expected early in fiscal year 2017. An analysis of alternatives for the JLTV is required to support a Milestone B decision on the program and will be provided to DOD's Cost Assessment and Program Evaluation group 60 days in advance of the Defense Acquisition Board review. The purpose of the analysis of alternatives is to assess alternatives for recapitalizing the fleets of light tactical vehicles currently operated by the four services. At Milestone B, DOD may consider an early start of initial production for a JLTV variant if technical maturity and risks are deemed appropriate. That would assume that JLTV prototypes would be at a high level of technical maturity and their designs very stable. Currently, it is unknown how this would affect the contractor competition plans, which call for two contractors to proceed through EMD. JLTV is being designed to protect its occupants from the effects of mines and improvised explosive devices without sacrificing its ability to carry a payload or its automotive performance, a balance that other TWVs have not achieved.
The improved balance of capabilities expected from the JLTV is summarized in table 3, which compares the capabilities of the M-ATV, the JLTV, and the current generation of HMMWVs. Our comparison of TWV capabilities indicates that the JLTV is expected to balance personnel protection, payload, and performance; provide defensive measures covering troops while in transport; and increase payload capability. In the performance area, JLTV is intended to be better than the M-ATV and HMMWV in terms of turning radius, cross-country speed, acceleration, minimum range, power generation, and ground clearance. In addition, JLTV operational availability is intended to be better than that of both other vehicles. The JLTV's payload, protection, and performance will be equal to or better than those of the Expanded Capability Vehicle (ECV)/Up-armored HMMWV (UAH) in all categories. Unlike the M-ATV, the HMMWV is transportable by helicopter, and the JLTV is planned to be as well. JLTV's level of protection is expected to be significantly better than the ECV/UAH's, especially its underbody protection. The JLTV payload requirement is a minimum of 3,500 pounds for its general-purpose mobility vehicle and 5,100 pounds for its utility vehicle. Those levels are comparable to the M-ATV payload and better than the HMMWV levels. The reliability of the JLTV, in mean miles between operational mission failures, is projected to be 3,900 miles better than that of the M-ATV and more than 2,250 miles better than that of the ECV/UAH. Unlike the M-ATV, the JLTV program will need to demonstrate emerging technologies to determine if it can meet its proposed requirements. The services are not directly developing new technologies for JLTV, but are expecting the competing contractors to use new and existing technologies in their prototype vehicles. The purpose of the JLTV technology development phase is to validate the maturity of those technologies and to provide test data to evaluate the technical risks in meeting the proposed requirements for the JLTV family of vehicles. The knowledge gained from the tests will be used to assess the capability of meeting critical requirements such as force protection, mobility, transportability, reliability and maintainability, and technical performance. The Army Test and Evaluation Command is conducting JLTV testing at Aberdeen Proving Ground and Yuma Proving Ground. Ballistic testing of samples of the vehicle armor began in September 2009 and has been completed, but results are not yet available. Ballistic testing of vehicle hulls began in December 2009 at Aberdeen. The three contractors delivered prototype vehicles to Aberdeen and Yuma in May, and testing has begun. Vehicle performance testing is being conducted at Aberdeen, and reliability, availability, and maintainability testing is being conducted at Yuma. On the basis of the initial analysis of physical characteristics, the majority of the prototypes are within the current limits for vehicle weight. Testing on the prototypes is expected to last about a year, and a formal test report will be issued at the end of testing. Interim reports will be issued throughout the year. The technology development phase is currently scheduled for completion in the fourth quarter of fiscal year 2011, to be followed by the Milestone B decision review.
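For reference, the reliability deltas cited above can be cross-checked against requirement figures discussed later in this report. Assuming the 4,500 mean miles between operational mission failures proposed for the Category B JLTV and the 600-mile M-ATV requirement:

\[
4{,}500 - 600 = 3{,}900\ \text{miles}
\]

Likewise, an advantage of more than 2,250 miles implies an ECV/UAH reliability below \(4{,}500 - 2{,}250 = 2{,}250\) miles, which is consistent with the sub-1,300-mile figure reported for at least one ECV/UAH version.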
According to direction from the milestone decision authority, the following technology development phase exit criteria are expected to be completed for the Milestone B decision point:
approval of the appropriate capabilities development document, supported by analysis from technology development work;
demonstration of a technology readiness level 6 or higher for all critical technologies in an integrated system;
an assessment of commonality across the JLTV family of vehicles; and
an assessment of the technical risks relevant to entering initial production.
Under best acquisition practices, the JLTV program would need to achieve a match between its requirements and available resources (technologies, money, and schedule) at the Milestone B decision point and before proceeding into the EMD phase. During the remainder of the technology development phase, the task for the JLTV program will be to determine, through the system engineering process, whether the proposed requirements can be met with the resources expected to be available, including existing technologies. If that match cannot be demonstrated, the program may have to adjust the requirements to more achievable levels or increase its estimate of resources needed. Fully resolving these questions with knowledge at the Milestone B point is key to establishing a sound business case for the JLTV program. Such knowledge would include a completed preliminary design review, evidence of robust system engineering analyses, and a technology readiness assessment. Our prior work has consistently shown that a sound business case is a critical determinant of a program's ultimate success. The JLTV program has identified risks in achieving some of its key requirements and is considering trade-offs in vehicle designs and requirements. Transportability and force protection are key performance parameters for JLTV, but the vehicle's currently projected weight may preclude its transport by rotary-wing aircraft. The vehicle's weight is driven by the level of force protection expected from JLTV. Force protection is achieved with armor, and the currently available armor is heavier than desired. JLTV vehicles are being designed with a basic layer of armor that is integrated with the chassis and additional armor that is bolted on over that basic layer. Transportability by rotary-wing aircraft is a critical joint requirement; the Marine Corps could potentially withdraw from the program if rotary-wing transport of the JLTV is not possible. The services are using 15,600 pounds, the lift capacity of the CH-47F, to determine the maximum transport weight of the vehicle. The transport weight differs from the gross vehicle weight in that the add-on armor is removed prior to transport by aircraft. However, the JLTV program is concerned that the transport weight may be in excess of the CH-47F's lift capacity. To meet the weight limit, the services are considering design and requirement trade-offs. Although a six-passenger Category B vehicle was the original joint priority, the services are now focusing on a four-passenger vehicle, at least initially. According to program officials, the technology will not have advanced far enough to make armor light enough to build a six-passenger vehicle with the needed protection level at a transport weight within the CH-47F's lift capacity. As a result, the services are proposing to build a four-passenger vehicle in the first increment.
A six-passenger vehicle is still under consideration by the Army for the first increment; however, it would not be in the initial procurement. Progress in armor technology development and demonstration—and its impact on JLTV force protection and transportability requirements—will be evaluated throughout the technology development phase and at the Milestone B review. The JLTV requirement for operational availability has been set at 95 percent and is considered a key performance parameter. JLTV reliability is a key system attribute and is an element within operational availability. The original technology development requirement was 4,500 mean miles between operational mission failures for the Category B vehicles and 6,100 miles for the Category A and C variants. This level of reliability may be aggressive, since it is two to three times greater than the reliability levels for other tactical vehicles. For example, the M-ATV requirement is 600 miles between failures, and at least one version of the ECV/UAH has a reliability of less than 1,300 miles between failures. If the JLTV reliability metrics cannot be met, availability will suffer and total ownership cost, another key system attribute, could increase significantly. While contractor modeling suggests that JLTV will meet its requirements at the subcomponent level, program officials remain uncertain. The services are conducting extensive reliability testing of JLTV prototypes during the technology development phase. However, according to program officials, the reliability requirement is up for reconsideration and may need to be adjusted. The services also have a plan for increasing reliability during EMD. JLTV investment to date has been modest—about $300 million for the technology development phase through fiscal year 2010. However, JLTV funding will be expanding in the coming years. From fiscal years 2011 through 2015, an additional $580 million will be needed for JLTV development through the EMD phase. JLTV production funding is currently projected to start in fiscal year 2013. Through fiscal year 2015, the services are projecting JLTV procurement funding of about $2.7 billion. JLTV's total acquisition costs could ultimately be substantial. The target unit production cost for JLTV ranges from $306,000 to $332,000, depending on vehicle category. That compares to the base M-ATV unit price of about $445,000. The JLTV target cost does not include general and administrative costs and fees. Armor kits and mission equipment packages, whose costs have not yet been determined, are also additional. As a reference point, the cost of government-furnished equipment averaged $532,000 per vehicle for the M-ATVs. If similar costs apply to JLTV, its procurement unit cost could be in excess of $800,000. An independent cost estimate will be developed in support of the Milestone B decision review; however, whatever the final unit cost, the cost of the first increment will be substantial. The joint requirement is for 60,383 vehicles to be bought and fielded by fiscal year 2022. Using the lowest projected vehicle cost of $306,000, the cost of 60,383 JLTV vehicles would be almost $18.5 billion, and much more when government-furnished equipment and other costs are included. JLTV's affordability will be a key determination at the Milestone B decision point. The services and DOD will have to balance the cost of the JLTV against other service needs.
The cost could determine whether to continue with the program as planned, look at other ways of meeting the requirements, or buy fewer vehicles. The acquisition plans for both the M-ATV and JLTV are consistent with the services' TWV strategies, which emphasize maintaining a balance of performance, payload, and protection capabilities across their TWV fleets as they continue to adjust to improvised explosive device and roadside bomb threats. The M-ATV fulfills a short-term, joint, urgent operational need in support of current operations, and the JLTV is the joint services' long-term solution to replace the HMMWV. The strategies indicate that the services plan to recapitalize older HMMWVs in the inventory, integrate MRAPs, including M-ATVs, into their respective fleet mixes, and acquire JLTVs if cost and performance goals can be achieved. However, those assumptions could be affected by several forthcoming decisions and events, including those related to (1) the development of revised plans for continued production of new HMMWVs or recapitalization of older HMMWVs; (2) the availability of sufficient funds in the base budgets for the operation and support of those MRAP vehicles to be integrated into the service fleets; and (3) the extent to which the JLTV program can meet cost, schedule, and performance expectations. These events and decisions could change how and when the strategies are implemented as well as the specific composition of the TWV fleets in the coming years. The Army's October 2009 strategy reiterated that the 2008 Army and Marine Corps joint TWV investment strategy was based on four tenets:
Take maximum advantage of existing platforms by recapitalizing them and introducing product improvements.
Plan for the integration of MRAP vehicles into the fleet mixes.
Emphasize a mixed fleet approach that spans the "iron triangle" of protection, payload, and performance.
Transition to a fleet of tactical vehicles that have scalable protection (integrated A-kits and add-on-armor B-kits).
Both services have also acknowledged that planning uncertainties included JLTV cost and performance, and both emphasize the need to adopt TWV strategies that are affordable as a whole. The Army strategy states that in an era of constrained financial support and ever-increasing materiel costs, it will work to control cost growth and variant complexity within the TWV fleet. The Marine Corps strategy states that the underlying guidance for the strategy requires the fielding of an affordable fleet of ground combat and tactical vehicles that provides required capabilities and adequate capacity. Each of the underlying tenets, as well as the consideration of affordability, may be affected by one or more forthcoming events and decisions. The first tenet of the respective Army and Marine Corps strategies is that the services would recapitalize existing platforms and introduce product improvements. In fiscal year 2010, the Army planned to reprogram approximately $560 million to the HMMWV Recapitalization program, $13 million to initiate a competitive effort to assess alternative solutions to recapitalize up-armored HMMWVs, and use the remaining funds to support other unspecified Army priorities. The HMMWV Recapitalization program converts older utility HMMWVs into upgraded configurations. Since the program began in 2004, approximately 30,000 vehicles have been recapitalized, at a cost of approximately 35 percent of the value of a new production light-utility vehicle.
The strategy also reported that the recapitalized vehicles are “like-new” and will serve the Army for an additional 15 years. In January 2010, the Army also began a pilot program to recapitalize up-armored HMMWVs to the same level of performance and protection as the newest production up-armored HMMWVs. Subject to the approval of the requested reprogramming and the availability of funds, the Army planned to continue the recapitalization of nonarmored HMMWVs and the up-armored HMMWVs to ensure continued sustainment of the HMMWV fleet in the near term, and competitively procure and test prototypes that demonstrate survivability improvements that may be possible on the expanded-capability vehicle HMMWV chassis. However, because the Army’s requested reprogramming action was denied by Congress, all recapitalization plans have been suspended pending the development of revised plans for the HMMWV production and recapitalization programs. Until the scope and cost of those plans are clarified, it may be difficult for the services to identify the funds that may be available to support other TWV needs, such as long-term funding for MRAP and M-ATV operations and support, and to define what they can afford in terms of JLTV procurement. The second underlying tenet of the services’ TWV strategies involves the planning for the integration of MRAP vehicles (including the M-ATV) into the fleet mixes. By the end of fiscal year 2010, the Army and Marine Corps will have invested $35.7 billion to acquire the MRAP family of vehicles, and most of these vehicles have been delivered to units operating in Iraq and Afghanistan. Also, improved suspension systems have been added to some original MRAP vehicles to maintain acceptable levels of protection while improving automotive performance. In general, DOD considers the MRAP vehicle an important part of the tactical vehicle portfolios, but not one that replaces the need for other vehicles in the force structure, such as HMMWVs or JLTVs. Despite the significant investment in MRAP vehicles, it is not likely that MRAPs would be used to offset quantities of JLTVs, although MRAPs may be able to offset some quantities needed for route clearance, explosive ordnance disposal, and medical evacuation units. In addition, the continuation of operations in Iraq and Afghanistan will influence the quantities of MRAP vehicles available for integration into the services’ force structures, the pace of the integration, and the funding for operations and support costs by the services. The military services continue to refine their plans to integrate the M-ATV—and its predecessor, the MRAP—into the force structure. As of June 2010, the Army had more than 19,800 M-ATV and MRAP vehicles in the theater and in the continental United States. The Marine Corps has a combined MRAP and M-ATV fleet of about 3,300 vehicles. Both services acknowledge the potential operating and support costs as a factor that could affect implementation of their plans. Thus far, most of the cost to acquire, field, operate, and sustain the M-ATV and the MRAP has been funded through supplemental appropriations. However, beginning in 2012, the services will likely assume at least part of the operation and sustainment costs for these vehicles, according to MRAP Joint Program Office officials.
While the cost to operate and sustain the vehicles for their expected service life will depend on the military services’ specific plans to integrate the vehicles into their force structures, the MRAP joint program office estimates that the cost to operate and maintain the vehicles through 2024 will be about $10.8 billion. With much of this cost currently funded out of supplemental appropriations, the services have expressed some concerns about their ability to fund operations and support costs for TWVs in the out years within base budget requests. For example, Marine Corps officials acknowledged that the projected cost to sustain their tactical vehicle fleet contributed to a decision to reduce the quantities of tactical vehicles by 30 percent over the next few years. A senior Army headquarters official told us that the Army would likely be requesting funds for sustainment in its overseas contingency operations budget request for fiscal year 2012 to supplement sustainment in its base budget. JLTV is intended to provide the rebalanced payload, performance, and scalable protection called for by the strategies’ third and fourth tenets. Both the Army and Marine Corps TWV investment strategies identify JLTV as an important future capability. Both services have acknowledged that planning is complicated by current JLTV cost and performance uncertainties. The strategies recommend the scheduling of dates within the yet-to-be-developed TWV Modernization Strategy by which current/legacy light tactical vehicles will no longer be procured and the procurement focus will switch to JLTV variant production. The Army’s October 2009 strategy identifies the JLTV Force Application vehicles as the objective solution for the armament carrier mission. The strategy states that this payload category is the Army’s number one priority for development and fielding. However, as discussed previously, this category of JLTV vehicles is at risk of not meeting the transportability requirement because of its projected weight, and its projected reliability requirement is two to three times greater than that of other tactical vehicles. Last year, we reported that DOD does not have a comprehensive TWV investment strategy and concluded that the development of a strategy would be beneficial to DOD and the services in minimizing the potential for unplanned overlap or duplication. In commenting on the report, DOD stated that it plans to prepare a departmentwide strategy for making sound TWV investment decisions that it says will dovetail with the services’ equipping strategies and will endeavor to align requirements, resources, and acquisition strategies into a unified plan. However, as of September 2010, DOD had not yet set a timetable for completing the strategy. In recommending such a strategy and implementation plan in 2009, we noted that DOD should assess and prioritize the capabilities and requirements of similar vehicles needed in the near and long term; estimate the funding, time, and technologies that will be required to acquire, improve, and sustain these systems; balance protection, payload, and performance needs with available resources, especially for light tactical vehicles; and identify contingencies in case there are development problems, delays with key systems, or funding constraints. There are several issues specific to TWVs that will likely influence the DOD-wide strategy.
These include the near-term decisions and events related to the continued production or recapitalization of HMMWVs, the operational requirements for MRAPs (to include the M-ATV), and the projected cost and capabilities of the JLTV family of vehicles. In addition, the Secretary of Defense has recently announced several initiatives to free up funds for modernization and to create efficiencies in programs. These initiatives may also influence the DOD-wide strategy by reinforcing the need to minimize the potential for unplanned overlap or duplication. For example, up until now, the services have not considered the vehicles in the MRAP family—with the exception of some vehicles planned for use by route clearance, explosive ordnance disposal, and medical evacuation units—to offset the need for or replace other TWVs. Given the high potential cost of the JLTV, an offset could offer substantial savings, albeit with potential performance tradeoffs. To illustrate, a 5 percent reduction in JLTV quantities could save nearly $2.5 billion, assuming a unit cost of $800,000; a 10 percent reduction could save nearly $5 billion. While the Army and Marine Corps have completed and continue to conduct engineering, cost, funding, and vehicle mix analyses, neither service has performed a formal cost-benefit analysis to consider savings from JLTV offsets or the costs of increasing recapitalization of existing vehicles. The M-ATV program has been successful, delivering well-performing vehicles early and at an estimated cost of $12.5 billion. As with the larger MRAP vehicles produced earlier, the M-ATV program had no new technologies to develop, a factor common to the success of both efforts. JLTV is a well-structured program with desirable features, such as a competitive technology development phase with an incremental approach to fielding increasingly capable vehicles. Unlike the M-ATV and MRAP, JLTV has demanding requirements—to provide MRAP-like protection at HMMWV-like weight—that necessitate technological and engineering advances. It is less certain, then, that at the conclusion of the technology development phase, a JLTV solution will emerge that provides all of the performance required yet stays within weight limits and delivers the desired high reliability. Before the decision can be made to proceed into the engineering and manufacturing development phase, the precursor to production, DOD may find that tradeoffs are necessary to match JLTV requirements with available resources (including time, money, and technical knowledge). The knowledge of what is achievable with JLTV should be in hand by late fiscal year 2011. Also, by that time, the services should have a better understanding of (1) the scope and cost of the HMMWV recapitalization or production effort, and (2) the placement of the MRAP family of vehicles in the services’ force structures, including funding in base budgets for operating and support costs. To the extent the DOD-wide TWV strategy captures the knowledge gained from these activities, it can provide a sound guide for making near-term decisions that facilitate attainment of longer-term objectives. In so doing, the DOD-wide strategy can help reconcile the aggregate affordability and other implications of these programs with the competing demands of the department. Specifically, any potential offsets between the MRAP vehicles and JLTVs, to the extent they are supported by cost-benefit analyses, could save both acquisition and support costs.
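The offset savings cited above reduce to the same simple arithmetic; a minimal sketch, assuming the $800,000 loaded unit cost and the 60,383-vehicle joint requirement stated earlier in this report:

```python
# Illustrative arithmetic only; figures are those cited in this report.
JLTV_QUANTITY = 60_383
LOADED_UNIT_COST = 800_000  # dollars per vehicle, including government-furnished equipment

for offset in (0.05, 0.10):
    vehicles_offset = JLTV_QUANTITY * offset
    savings_billions = vehicles_offset * LOADED_UNIT_COST / 1e9
    print(f"{offset:.0%} reduction: ~{vehicles_offset:,.0f} vehicles, ~${savings_billions:.1f} billion saved")
# 5% reduction: ~3,019 vehicles, ~$2.4 billion saved (the "nearly $2.5 billion" above)
# 10% reduction: ~6,038 vehicles, ~$4.8 billion saved (the "nearly $5 billion" above)
```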
We recommend that the Secretary of Defense:
- Enhance the prospects for the successful outcome of the JLTV program by ensuring that the JLTV program clearly demonstrates at the Milestone B decision point that it has achieved a match between its requirements, particularly the transportability and reliability requirements, and available resources (technologies, funding, and schedule) before beginning its EMD phase.
- Stage the timing of the DOD-wide TWV strategy so that it captures the knowledge gained during the year from the JLTV technology development phase, as well as from the decisions made on the HMMWV and MRAP programs.
- Include in the strategy a cost-benefit analysis that could minimize the collective acquisition and support costs of the various TWV programs, and reduce the risk of unplanned overlap or duplication. Such a cost-benefit analysis should provide an estimate of dollar savings for various options for offsetting JLTV quantities in favor of recapitalizing existing vehicles.
In its comments on our draft report, DOD concurred with all of our recommendations. DOD’s response is reprinted in appendix II. On our recommendation that the JLTV program clearly demonstrate that it has achieved a match between its requirements and available resources, DOD concurred and noted that this is a Milestone B requirement. We made the recommendation because the requirements for JLTV transportability and reliability are recognized risks, and tradeoffs will be needed to achieve an appropriate balance of requirements and resources at the Milestone B decision point. Such tradeoffs will need to be supported by quantifiable evidence and rigorous analysis. Regarding our recommendation on the DOD-wide TWV strategy, DOD concurred, noting that the DOD-wide TWV strategy will not be limited to the JLTV, HMMWV, and MRAP vehicles and will leverage the Army TWV strategy to be released shortly. We agree that because of the size of its TWV fleet and acquisitions, it seems reasonable for DOD to leverage knowledge from the Army strategy. However, in the end, it is essential for DOD to ensure that the services’ desire to have the most modern, capable TWVs is balanced with fiscal realities. On our final recommendation that calls for cost-benefit analyses to reduce the risk of unplanned overlap and duplication among the TWV programs, DOD concurred and stated that the JLTV program is conducting an Analysis of Alternatives that explores potential offsets to JLTV quantities, including those related to the placement of MRAPs in authorized brigades. DOD also notes that the recapitalization of existing vehicles is a major component of an affordable TWV strategy. We appreciate DOD’s initiatives to manage its TWV portfolio in a balanced and affordable manner. We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibilities for DOD. We will also send copies to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director, Office of Management and Budget. Copies will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Major contributors to this report included William Graveline, Assistant Director; Dayna Foster; Danny Owens; Bob Swierczek; Hai Tran; Paul Williams; and Marie Ahearn. To determine the Department of Defense’s (DOD) strategy for and progress in rapidly acquiring and fielding the Mine Resistant Ambush Protected (MRAP) All Terrain Vehicle (M-ATV), we obtained and reviewed program documents and held discussions with system developers, acquisition officials, and contractor representatives. To determine the cost of M-ATV, we obtained and reviewed cost estimates and held discussions with program officials. To determine the challenges that remain for the M-ATV program, we held discussions with system developers, acquisition officials, and technologists. Discussions were held with officials from Department of the Army, Office of the Deputy Chief of Staff, G3/5/7, Arlington, Virginia; Army Tank and Automotive Command, Research, Development and Engineering Center, Warren, Michigan; Defense Contract Management Agency, Oshkosh, Wisconsin; Joint MRAP Vehicle Program Office, Stafford, Virginia; Marine Corps Combat Development Center, Quantico, Virginia; Marine Corps Systems Command, Quantico, Virginia; and Oshkosh Corporation, Oshkosh, Wisconsin. To determine the demonstrated performance of the M-ATV, we obtained and reviewed test reports and held discussions with officials from the Army Test and Evaluation Command, Aberdeen Test Center, Aberdeen, Maryland; and the National Ground Intelligence Center, Charlottesville, Virginia. To determine the expected features of the Joint Light Tactical Vehicle (JLTV) as it moves toward Milestone B in fiscal year 2011, we obtained and reviewed updated program documents and held discussions with Army and Marine Corps acquisition and test officials. Using comparison data provided by Army officials, we prepared a table comparing JLTV’s expected features and cost with those of the M-ATV and current generation High Mobility Multi-purpose Wheeled Vehicle. We held discussions with officials from the Army, Program Manager, Tactical Vehicles, Warren, Michigan; Army, Product Manager, Joint Light Tactical Vehicle, Selfridge Air National Guard Base, Michigan; and the Army Test and Evaluation Command, Aberdeen Proving Ground, Aberdeen, Maryland. To determine the extent to which current plans for M-ATV and JLTV are consistent with Army and Marine Corps tactical wheeled vehicle investment strategies, we obtained and reviewed updated strategies and held discussions with DOD, Army, and Marine Corps officials. Discussions were held with officials from the Under Secretary of Defense, Acquisition, Technology, and Logistics, Arlington, Virginia; Army, Office of the Deputy Chief of Staff, G8, Arlington, Virginia; Assistant Secretary of the Army, Acquisition, Logistics, and Technology, Arlington, Virginia; and Marine Corps Systems Command, Quantico, Virginia. We conducted this performance audit from November 2009 to November 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The Department of Defense (DOD) is acquiring two new tactical wheeled vehicles (TWV): the Mine Resistant Ambush Protected (MRAP) All Terrain Vehicle (M-ATV) and the Joint Light Tactical Vehicle (JLTV). The $12.5 billion M-ATV is for use in Afghanistan; JLTV is the future replacement for vehicles like the High Mobility Multi-purpose Wheeled Vehicle (HMMWV). GAO was asked to assess (1) DOD's progress in rapidly acquiring and fielding M-ATVs, (2) JLTV's expected features and cost compared to other TWVs, and (3) the extent to which the current plans for M-ATV and JLTV are consistent with the services' TWV investment strategies. GAO reviewed documents and held discussions with key officials to determine program strategies, costs, performance, and anticipated features; and compared M-ATV and JLTV plans with service strategies. The M-ATV program has been successful, delivering well-performing vehicles ahead of schedule at an estimated cost of $12.5 billion. No major issues have been identified in testing and early fielding. In developing the M-ATV acquisition strategy, lessons learned from the acquisition of MRAPs in Iraq were applied. Like the earlier MRAPs, the M-ATVs did not require technology development, a key factor in the program's success. As of late August 2010, 7,488 vehicles had been delivered to the government and 4,379 had been fielded to units in Afghanistan. Fielding is expected to be completed in December 2010. The urgent need for these vehicles resulted in their fielding and testing at the same time; however, source selection testing was conducted, and no vehicles were fielded until their safety was verified. Jointly managed by the Army and Marine Corps, JLTV is expected to provide protection levels that are comparable to the M-ATV but without loss of payload or automotive performance. JLTV's acquisition costs are yet to be determined but are expected to be substantial. Unit costs could be over $800,000--somewhat less than the M-ATV's, with mission equipment making up more than half of the costs. Unlike M-ATV and earlier MRAPs, JLTV has demanding projected requirements that necessitate technological and engineering advances. Key challenges are whether the vehicle can provide the performance and reliability required yet stay within the weight limits for helicopter transport. Difficult tradeoffs in requirements may be necessary. At this point, it is a well-structured program with desirable features like a competitive technology development phase. This phase is scheduled to be completed by late fiscal year 2011, when DOD will decide if the program should enter the engineering and manufacturing development phase. That is the point where JLTV should clearly demonstrate that its projected requirements can be met with available resources. Evidence of that match would include a completed preliminary design review and a technology readiness assessment that shows all technologies to be fully mature. Current plans for M-ATV and JLTV dovetail with the objectives of the most recent Army and Marine Corps investment strategies. The implementation of those strategies, however, will be influenced by (1) the decision to continue producing new HMMWVs, recapitalize the existing HMMWV fleet, or both; (2) long-term funding for MRAP and M-ATV sustainment; and (3) the specific cost and capabilities of JLTV. The departmentwide strategy for TWVs that DOD plans to prepare would benefit greatly from the resolution of these issues.
To the extent this strategy captures the knowledge gained by the services, it can reconcile the aggregate affordability and other implications of the various tactical wheeled vehicle programs with the competing demands of the department. For example, at this point, the service strategies consider MRAP vehicles to be additive to the force structure, not offsetting quantities of HMMWVs or JLTVs. Any potential offsets between the MRAP vehicles and JLTVs, to the extent they are supported by cost-benefit analyses, could save both acquisition and support costs. GAO recommends that DOD (1) ensure that the JLTV program clearly demonstrates a match between requirements and resources; (2) stage the timing of the DOD-wide TWV strategy so that it captures key knowledge; and (3) include in the strategy a cost-benefit analysis that could minimize the collective acquisition and support costs of the various TWV programs, and reduce the risk of unplanned overlap or duplication. DOD concurred with GAO's recommendations.
Many individuals suffering from advanced chronic obstructive pulmonary disease or certain other respiratory and cardiac conditions are unable to meet their bodies’ oxygen needs through normal breathing. Supplemental oxygen has been clinically shown to assist many of these patients. Medicare’s eligibility criteria for the home oxygen benefit are quite specific. Patients must have (1) an appropriate diagnosis, such as chronic obstructive pulmonary disease; (2) clinical tests documenting reduced levels of oxygen in the blood; and (3) a certificate of medical necessity, signed by a physician, prescribing the volume of supplemental oxygen required in liters per minute and documenting whether the patient needs a portable unit in addition to a home-based stationary unit. Physicians can prescribe a specific type of oxygen system on the certificate of medical necessity, or they can allow the oxygen supplier to decide which type of system best meets a patient’s needs. Currently, there are three methods, or modalities, through which patients can obtain supplemental oxygen: compressed gas, which is available in various-sized tanks, from large stationary cylinders to small portable cylinders; oxygen concentrators, which are electrically operated machines about the size of a dehumidifier that extract oxygen from room air; and liquid oxygen, which is available in large stationary reservoirs and portable units. For most patients, each of the three modalities—compressed gas, liquid oxygen, and oxygen concentrator—is clinically equally effective for use as a stationary unit. However, liquid oxygen is most often prescribed for the small proportion of patients who require a very high oxygen liter flow. As shown in table 1, over the past 10 years the use of oxygen concentrators has increased substantially, and the use of compressed gas as the primary, home-based unit is now negligible. At the time of our review, the monthly Medicare fee schedule allowance for a stationary oxygen system was about $285, and it is currently about $300. Medicare pays 80 percent of the allowance, and the patient is responsible for the remaining 20 percent. The Medicare allowance covers use of the equipment; all refills of gas or liquid oxygen; supplies such as tubing; a backup unit, if provided, for patients using a concentrator; and services such as patient assessments, equipment setup, training for patients and caregivers, periodic maintenance, and repairs. In addition to a stationary unit for use in the home, about 75 percent of Medicare home oxygen patients have portable units that allow them to perform activities away from their stationary unit and outside the home. The most common portable unit is a compressed gas tank set on a small cart that can be pulled by the user. Highly active individuals who spend a great deal of time outside the home may use a portable liquid oxygen cylinder or a lightweight gas cylinder, both of which can be carried in a backpack or shoulder bag. These units may be used with an oxygen conserving device to increase the amount of time a single cylinder can be used. The Medicare monthly allowance for portable equipment is currently about $48, regardless of the type of unit. For the period we reviewed, the allowance was about $45. The Balanced Budget Act of 1997 reduced Medicare reimbursement rates for home oxygen by 25 percent effective January 1, 1998, and by an additional 5 percent effective January 1, 1999. Thereafter, the Medicare rates are to be frozen through 2002.
The act also requires the Secretary of HHS to undertake a 3-year competitive bidding demonstration project for home oxygen, to be completed by December 31, 2002. Medicare’s monthly fee schedule allowances for home oxygen are much higher than the rates VA pays. As shown in table 2, during the first quarter of fiscal year 1996, Medicare’s monthly fee schedule allowance averaged $320 per patient, including an allowance for a portable unit for the 75 percent of Medicare patients who obtain portables. VA’s average monthly payment, based on all costs for a sample of 5,000 VA patients, was $155. After adding a 30-percent adjustment to VA payments to account for the higher costs associated with servicing Medicare patients, the average VA monthly payment was $200, or almost 38 percent less than Medicare’s allowance of $320. In comparing Medicare and VA payments, we carefully considered all factors that could account for differences in the costs of servicing the two patient groups. Such factors could include clinical characteristics of each patient population as well as differences in how the two programs are administered. Regarding clinical differences, Medicare and VA patients with pulmonary insufficiency must meet the same medical eligibility criteria for home oxygen, and clinical experts and suppliers told us that the home oxygen needs of the two patient groups are essentially the same. We excluded from our analysis the small number of VA patients who were receiving home oxygen for conditions other than pulmonary insufficiency, such as cluster headaches. Utilization patterns for stationary equipment were remarkably similar. Of the 5,000 VA patients in our nationwide sample, about 84 percent used an oxygen concentrator, and 16 percent used stationary liquid oxygen systems. Among Medicare beneficiaries nationwide, 86 percent used oxygen concentrators. In contrast, program differences do affect the costs suppliers incur when serving the two patient groups, and our analysis included an adjustment to reflect those factors before we compared VA and Medicare’s payment rates. The upcoming reductions in the modality-neutral Medicare payment rates have raised concerns that Medicare patients will have less access to (1) stationary liquid systems, from which patients can refill portables; (2) refills of gas or liquid portable units for patients who have concentrators; and (3) new lightweight, but more expensive, portable systems. In response to these concerns, some groups have proposed changes to Medicare’s modality-neutral payment system. Although stationary liquid oxygen systems are more expensive than concentrators, they enable highly mobile patients to refill their portable liquid units from their stationary reservoirs. This provides these patients greater autonomy and requires suppliers to make fewer deliveries of replacement tanks than are needed for patients using concentrators along with portable compressed gas tanks. The Medicare fee schedule allowance is the same for both stationary liquid systems and concentrators—about $285 per month during the first quarter of fiscal year 1996. During the same period, monthly VA payments averaged about $125 for patients with concentrators and $315 for patients with stationary liquid systems. Yet about 15 percent of both Medicare and VA patients had liquid stationary systems, an indication that the Medicare modality-neutral rates then in effect did not restrict patient access to liquid systems.
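The comparison above reduces to a modality-weighted average, a 30-percent adjustment, and a percentage difference. A back-of-envelope sketch using the figures cited in this report (the report's $155 VA average comes from invoice data for the 5,000-patient sample, but the modality-weighted arithmetic reproduces it closely):

```python
# Illustrative arithmetic only; all dollar figures are those cited in this report.
VA_CONCENTRATOR = 125   # average monthly VA payment, concentrator patients ($)
VA_LIQUID = 315         # average monthly VA payment, stationary liquid patients ($)
MIX = {"concentrator": 0.84, "liquid": 0.16}   # modality mix in the VA sample

va_average = MIX["concentrator"] * VA_CONCENTRATOR + MIX["liquid"] * VA_LIQUID
va_adjusted = va_average * 1.30   # 30-percent adjustment for Medicare servicing costs

MEDICARE_ALLOWANCE = 320          # average monthly Medicare allowance ($)
difference = (MEDICARE_ALLOWANCE - va_adjusted) / MEDICARE_ALLOWANCE

print(f"VA average: ${va_average:.0f}")          # ~$155, matching the nationwide average
print(f"VA adjusted: ${va_adjusted:.0f}")        # ~$202, i.e., the report's rounded $200
print(f"Medicare difference: {difference:.0%}")  # ~37%, close to the "almost 38 percent"
                                                 # cited above (which uses the rounded $200)
```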
The upcoming reduction in Medicare payment rates, however, could lead some suppliers to shore up their profits by offering only oxygen concentrators for stationary systems, which would also reduce access to liquid portable refills from stationary units. Most Medicare suppliers now provide relatively few stationary liquid systems or none at all. Of about 6,500 Medicare home oxygen suppliers, about 82 percent obtained 5 percent or less of their Medicare revenues from stationary liquid oxygen systems. Furthermore, almost 25 percent of oxygen suppliers received virtually all of their Medicare revenue from oxygen concentrators. Providing only concentrators allows these suppliers to maximize their profits by avoiding the higher costs associated with stationary liquid as well as with portable units. (Medicare considers the monthly fee for the stationary unit to cover supplies and services for portable units, so providing portable units costs suppliers more.) Since VA acquires home oxygen services under a fee-for-service payment system, VA can ensure that its patients have access to stationary liquid oxygen systems by paying more for them. In addition, VA doctors prescribe the type of system that they feel is most appropriate for their patients. Physicians with Medicare patients can help ensure that their patients obtain access to the type of oxygen delivery system they need by specifying on the certificate of medical necessity the oxygen delivery system that should be supplied. However, some physicians allow the supplier to decide. Our study included an analysis of the number of Medicare and VA patients who were provided portable units. Even though Medicare paid higher monthly fees to oxygen suppliers than VA, only about 75 percent of Medicare beneficiaries using home oxygen had portable units, while about 97 percent of the VA patients in our sample had portable units. About 1,500 suppliers, or almost 25 percent of all Medicare home oxygen suppliers, provided portable units to no more than 10 percent of their Medicare patients—far below the portable utilization rate of about 75 percent among all Medicare home oxygen beneficiaries. Among patients using compressed gas portable systems, VA patients in our sample obtained about four cylinders per month, while Medicare beneficiaries whose records we reviewed received about two cylinders per month. On the basis of these data, we determined that the lower VA payment rates did not result in less access to portable units or refills. Pulmonary specialists frequently recommend that their patients get as much exercise as possible. Clinicians point out that an overall respiratory therapy regimen that includes exercise may slow the deterioration associated with pulmonary insufficiency. According to some experts, an effective exercise program requires portable systems that are lighter and less cumbersome—but more expensive—than the common compressed gas E tanks that are pulled on a small cart. Currently available alternatives are portable liquid oxygen units, which can be refilled from stationary reservoirs at home, or lightweight aluminum gas cylinders, both of which may be used with an oxygen conserving device. Both of these portable systems are small and light enough to be carried in a backpack or shoulder bag, but they are more expensive than the traditional cart-mounted E tanks. Medicare claims data show that of the 363,000 Medicare patients with portable oxygen units in fiscal year 1996, almost 75,000, or about 21 percent, had portable liquid oxygen cylinders.
Medicare claims do not identify how many patients with portable gas systems had the traditional E tank or the smaller, lightweight cylinders. Our review of about 550 Medicare patient records indicated that only about 8 percent had lightweight tanks. The National Association for Medical Direction of Respiratory Care has proposed retaining Medicare’s modality-neutral payment for stationary systems but establishing two reimbursement rates for portable units—a lower rate for traditional E tanks and a higher rate for lightweight portable cylinders, which the Association describes as an ambulatory system. The Association proposes that prescribing physicians decide which type of portable system is most suitable for their patients. This approach has also been endorsed by the American Thoracic Society and the American Lung Association. In contrast, others have noted that Medicare’s modality-neutral rate is designed to meet the needs of the entire home oxygen population: Some patients are more expensive to service than others, but the rate is designed so suppliers will make a profit overall. These supporters of the modality-neutral rate also believe that the lack of clinical criteria for deciding which patients need a lightweight ambulatory unit means far more patients will obtain such ambulatory units than will benefit from them. Also, once a patient obtains an ambulatory unit, a lack of adequate controls in the Medicare program could lead to continued payment for the more costly unit when it is no longer needed. Since chronic obstructive pulmonary disease is progressive, a patient’s activity level and the need for a portable or ambulatory system can be expected to eventually decline. However, in our case record reviews, we could not identify any cases where monthly Medicare payments for a portable unit were discontinued for a patient receiving home oxygen. The Balanced Budget Act of 1997 allows HHS to establish separate payment rates and categories for different types of home oxygen equipment, as long as the adjustments are budget neutral. This provides HHS the flexibility to restructure reimbursements to ensure patient access to the equipment and services they need and to reflect market changes and new oxygen delivery technology, which continues to evolve. However, some suppliers, industry experts, and HCFA officials have expressed reservations about abandoning modality-neutral payments, citing the administrative complexity and oversupply of more expensive services that motivated creation of the modality-neutral system. Although Medicare payments for home oxygen include reimbursement for services, HCFA has not specified the type or frequency of services it expects home oxygen suppliers to provide. In contrast, VA encourages its medical centers to contract with suppliers that are accredited by JCAHO or comply with its standards. Even though VA’s reimbursements are less generous than Medicare’s, VA patients received more frequent service visits than the Medicare patients whose records we reviewed. To qualify as a Medicare home oxygen supplier, a company must obtain a supplier number from Medicare’s National Supplier Clearinghouse and follow basic business practices, such as filling orders, delivering goods, honoring warranties, maintaining equipment, disclosing requested information, and accepting returns of substandard or inappropriate items from beneficiaries. Other than these requirements, Medicare has no standards specific to the needs of home oxygen patients. 
In contrast, VA has both broad accreditation standards and specific contract terms that often define the specific type and frequency of services VA home oxygen patients should receive. VA contracts frequently specify company and personnel qualifications; requirements for staff training, patient education, and development of a patient plan of care; the type and number of patient service visits necessary; required response time for emergencies; and procedures for addressing patient concerns. Many VA contracts also identify the type of equipment to be used, often specifying brand names or equivalents, and equipment repair requirements. To ensure that suppliers comply with the terms of the contract, VA schedules random home visits by VA staff for a minimum of 10 percent of VA patients receiving home oxygen each year. Records we reviewed at oxygen suppliers for about 550 Medicare patients showed that 49 percent of the patients had clinical assessments during a 3-month period, and 30 percent had visits to check and maintain equipment. For the remaining 20 percent, there was no evidence in the suppliers’ records that the patient had been visited within the 3-month period for either a clinical assessment or an equipment check. Similarly, in 1994, the HHS Office of the Inspector General (OIG) reported on the services provided to Medicare home oxygen patients using oxygen concentrators. The OIG found that 17.5 percent of these Medicare patients did not receive an equipment check within a 3-month period, and over 60 percent did not receive any other patient services, such as a clinical assessment. In contrast, we found that 43 of the 46 VA medical centers in our review required the supplier to perform a clinical assessment, an equipment check, or both at least once every 3 months. Of these 43 medical centers, 36 required monthly clinical assessments or equipment checks, and 24 specified that these visits be conducted by respiratory therapists. The remaining three medical centers required that visits be conducted in accordance with oxygen equipment manufacturers’ specifications or in compliance with standards established by JCAHO. VA officials stated that each of these three medical centers had assessments and checks conducted at least once every 3 months. The Balanced Budget Act of 1997 mandates that the Secretary of HHS establish service standards for home oxygen “as soon as practicable.” The act also requires that peer review organizations evaluate access to, and quality of, home oxygen equipment provided to Medicare beneficiaries. Because no definitive national guidelines exist for the most appropriate level of patient support and equipment monitoring services, it is important that HCFA consult with the medical community and equipment manufacturers when developing standards to help ensure that those standards are based on the best available information. Medicare’s reimbursement rates for home oxygen exceed the competitive marketplace rates paid by VA, even after inflating rates by 30 percent to adjust for differences between the two programs. Yet the higher monthly rates Medicare pays appear to purchase the same home oxygen benefits as VA’s lower rates—or even fewer oxygen benefits. About 15 percent of both VA and Medicare patients received the more expensive stationary liquid oxygen systems, rather than concentrators. About 97 percent of VA patients received portable oxygen units, compared with about 75 percent of the Medicare patients.
VA patients also received more refills of portable gas tanks and more frequent service visits. And, unlike Medicare patients, VA home oxygen patients benefit from specific home oxygen supplier standards to help ensure that they receive the equipment and services they need. The Balanced Budget Act of 1997 includes provisions that will bring Medicare’s reimbursement rates more into line with the competitive marketplace rates paid by VA. The act also requires HHS to develop specific service standards for home oxygen suppliers that service Medicare patients as well as to monitor patient access to home oxygen equipment. Finally, the act gives HHS the flexibility to restructure the modality-neutral payment, if warranted, to ensure that Medicare patients obtain access to the equipment and services appropriate to their needs. We recommend that the Administrator of HCFA do the following: monitor trends in Medicare beneficiaries’ use of and access to stationary liquid oxygen systems and liquid and gas portables; monitor the availability and costs of new and evolving oxygen delivery systems, including lightweight portable systems and oxygen conserving devices, and work with the medical community to (1) evaluate the clinical benefits associated with the use of such equipment, (2) identify the patient populations most likely to benefit from the use of such equipment, and (3) educate prescribing physicians about existing options in oxygen delivery systems and their right to prescribe the system that best meets their patients’ needs; advise the Secretary of HHS whether a budget-neutral restructuring of the Medicare reimbursement system for home oxygen is needed to provide patient access to the more expensive home oxygen systems, and whether Medicare controls can be implemented to ensure that the use of such systems is limited to patients that can benefit from their use; and work with the medical community, the oxygen industry, patient advocacy groups, accreditation organizations, and VA officials to promptly finalize service standards for Medicare home oxygen suppliers. We provided a draft of this report to the Administrator of HCFA and the Secretary of VA. VA and HCFA officials suggested some technical changes, and we modified the text to reflect their comments. HCFA officials said that they are forming a work group that includes representatives of peer review organizations, the oxygen and health care industries, Medicare contractors, patient advocacy groups, and VA. This work group will develop the protocols for the peer review organizations to follow in their evaluation of access to, and quality of, home oxygen equipment. HCFA officials also stated that it would not be appropriate to establish a separate, higher reimbursement for a specific type of oxygen system, such as liquid portables, unless there were clear clinical criteria defining the medical need for such a system. As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report for 2 days. At that time we will make copies available to other congressional committees and Members of Congress with an interest in this matter, the Secretary of Health and Human Services, and the Secretary of Veterans Affairs. This report was prepared by Frank Putallaz and Suzanne Rubins, under the direction of William Reis, Assistant Director. Please call Mr. Reis at (617) 565-7488 or me at (202) 512-7114 if you or your staff have any questions about the information in this report. 
To evaluate the appropriateness of Medicare’s reimbursement rates for home oxygen, we considered comparing Medicare’s rates to those paid by Medicaid, private insurance companies, managed care plans, and the Department of Veterans Affairs (VA). All such comparisons have some inherent limitations. After evaluating the alternatives, we decided to use VA’s competitive contracting rates, with some adjustments, for our rate comparisons. We did not use Medicaid payment rates for our comparisons because each state has wide latitude in determining the benefits it covers and its reimbursement rates. Also, since Medicare is the largest single payer of home oxygen benefits, many states base their payment levels on Medicare’s fee schedule. Similarly, we found that private insurance companies use a wide range of methods to establish payment rates. Some firms base their fees on Medicare’s reimbursement levels, while others pay submitted charges or negotiate rates on a case-by-case basis. We found that some private insurers pay more than Medicare and others pay less. We were not able to identify any insurance company with a large number of beneficiaries on long-term home oxygen therapy whose rates could serve as the basis for a nationwide comparison with Medicare’s rates. Nor could we identify any private insurer that had done a study to determine the appropriate reimbursement level for home oxygen services. Furthermore, the coverage criteria for home oxygen varied both from company to company and within the same company depending on the type of coverage purchased by an individual or a group health plan. Medicare managed care plans that we contacted were unwilling to provide us information on the rates they negotiate with oxygen suppliers because they consider that information to be proprietary. However, during our patient file reviews at oxygen suppliers, we identified two Medicare managed care plans that pay about $200 a month for services comparable to those provided to fee-for-service Medicare beneficiaries. Because the availability of these data was very limited, we could not use them for our analysis. At the time of our review, VA provided home oxygen benefits to 23,000 patients at a cost of almost $26.5 million. VA’s medical criteria for using supplemental oxygen to treat pulmonary insufficiency are the same as Medicare’s. Further, clinical experts and suppliers told us that the home oxygen service needs of VA and Medicare patients with pulmonary insufficiency are essentially the same. We analyzed claims and charge data compiled by the four Durable Medical Equipment Regional Carriers and the Statistical Analysis Durable Medical Equipment Regional Carrier. These data provided information on how the Medicare home oxygen benefit has grown and how suppliers structure their Medicare billing for the different types of home oxygen systems. The Statistical Analysis Durable Medical Equipment Regional Carrier began compiling national claims data for home oxygen in 1994, so we concentrated on data from the past 2 fiscal years. We supplemented the national Medicare claims data with information from home oxygen suppliers’ records for about 550 Medicare patients. We obtained data on VA payments for home oxygen from original contractor invoices for a nationwide sample of about 5,000 VA patients, drawn from 46 of the 162 VA medical centers that have home oxygen contracts. These 46 VA medical centers included at least one VA medical center from each of VA’s 22 Veterans’ Integrated Service Networks.
Since each contract differs, we reviewed the contracts at each of the medical centers in our sample. The invoices we used were for October, November, and December 1995, and they included the cost of equipment rental; oxygen refills; supplies; and services, including the cost of any portable systems and contents provided to the patient. After excluding the relatively small number of patients using stationary gas systems from both patient groups, we found that about 84 percent of VA patients in our study used an oxygen concentrator, and 16 percent used a stationary liquid oxygen system. Among Medicare beneficiaries nationwide, 86 percent used concentrators, and 14 percent used stationary liquid oxygen systems. Some medical centers pay for oxygen refills on the basis of patient use. Other medical centers may incur additional charges for setup and service visits, for example, or for various types of supplies. Since Medicare pays one fee for everything, we “rebundled” the costs incurred by each VA center to compare the total per-patient cost with Medicare reimbursement rates. We excluded from our analysis cases in which VA medical centers provided the supplier with the equipment to be used and only paid the supplier a fee to maintain VA equipment. Further, we did not include in our analysis the small number of VA patients who used only compressed gas because this modality was likely to be used by patients to relieve cluster headaches, a condition not covered by Medicare’s home oxygen benefit. Included in our sample were VA patients who were using an oxygen concentrator or a stationary liquid oxygen system for the treatment of pulmonary insufficiency and who were required to meet the same medical criteria as Medicare patients on home oxygen. To determine if there were any significant geographic differences in costs, we grouped the VA medical centers by the geographic areas served by each of Medicare’s four Durable Medical Equipment Regional Carriers. We found that the average weighted cost for home oxygen for VA medical centers in three of the four geographic areas was within 10 percent of the $155 nationwide average. The average weighted cost for the VA medical centers in the fourth geographic area was 17 percent higher than the nationwide average. This region also had the highest percentage of patients on liquid oxygen, while the region with the lowest average cost had the highest percentage of patients on concentrators. We concluded that the modality mix within a region affected the average price more than geography did. Significant differences between the Medicare and VA programs may account for some of the variation in their home oxygen payment rates. Most significantly, VA competitively procures oxygen supplies and services, and Medicare does not. Other differences between the programs can place a greater administrative burden on suppliers who service Medicare patients. For example, VA preapproves each patient for home oxygen services, while Medicare requires that oxygen suppliers furnish a certificate of medical necessity completed by a physician before paying the suppliers’ claims. Also, VA patients are not responsible for any copayment; therefore, VA suppliers do not have to bill VA patients for copayments as they do for Medicare patients. In our meetings with home oxygen suppliers and industry representatives, we solicited their views and any data they could provide to quantify the differences in costs between serving VA and Medicare patients.
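Because Medicare pays a single bundled monthly fee while VA invoices itemize charges, the comparison required rebundling each VA patient's itemized costs into one monthly total. A minimal sketch of that step (the field names and dollar values below are hypothetical illustrations, not actual invoice data):

```python
# Hypothetical invoice records; the field names and dollar values are
# illustrative only, not actual VA contract data.
invoices = [
    {"patient": "A", "equipment_rental": 90, "oxygen_refills": 25,
     "supplies": 10, "service_visits": 20, "portable_system": 15},
    {"patient": "B", "equipment_rental": 240, "oxygen_refills": 40,
     "supplies": 12, "service_visits": 20, "portable_system": 30},
]

def rebundle(invoice: dict) -> int:
    """Sum the itemized monthly charges into a single Medicare-comparable total."""
    return sum(value for key, value in invoice.items() if key != "patient")

monthly_totals = [rebundle(invoice) for invoice in invoices]
average_cost = sum(monthly_totals) / len(monthly_totals)
print(f"Per-patient monthly totals: {monthly_totals}; average: ${average_cost:.0f}")
```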
One 1995 industry study estimated that the administrative requirements of Medicare could be accounted for by adding a 15-percent cost differential to the rates VA pays. In other words, the industry study estimated that the rates obtained by VA for home oxygen should be increased by 15 percent before they are compared with Medicare’s rates. However, on the basis of our analysis of the differences between the VA and Medicare programs, which are further discussed below, we concluded that a 30-percent adjustment to VA’s payment rates more adequately reflects the higher costs suppliers incur when servicing Medicare beneficiaries. Each VA medical center is responsible for procuring its home oxygen through the competitive bidding process. VA central office policy encourages the medical centers to contract with a supplier that is either accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) or complies with its standards. Within certain guidelines, each center can structure its contract to reflect its own operating philosophy relating to financial management and patient care as well as the local market for home oxygen. Most of the contracts we reviewed were very specific regarding the services they required and even the type of equipment to be provided to the patients. The competitive process allows each VA medical center to procure services from the supplier that can deliver the services required at the lowest cost to that medical center. VA’s competitive contracting process is attractive to some suppliers because the volume of patients it can ensure allows for economies of scale. Suppliers have said there are other advantages associated with a local VA contract. For example, winning a VA contract enhances a firm’s reputation and visibility in the local market. In addition, some firms hope to retain their VA patients if they become eligible for Medicare. By contrast, Medicare reimburses all qualifying suppliers for oxygen equipment provided to beneficiaries—it does not directly contract with suppliers; therefore, it cannot guarantee a fixed number of patients to any supplier. When a supplier under a VA contract is told by a VA medical center to provide home oxygen for a patient, the supplier knows that it will be paid for those services. For Medicare patients, the supplier is told by the prescribing doctor to provide oxygen services, generally arranged upon discharge from the hospital. However, it is only after the service is provided that the supplier knows for sure whether Medicare will pay for this service. The industry study noted above estimates that this payment risk adds the equivalent of 5 percent to the VA rate for a supplier serving Medicare beneficiaries. Our analysis of Medicare claims data showed that 18.7 percent of home oxygen claims in the first quarter of fiscal year 1996 were denied. However, most of these denials were for administrative reasons, such as duplicate claims or missing information. The actual denial rate for medical ineligibility was 2 percent. Medicare’s criteria for eligibility are specific and clear-cut, and suppliers told us they know if patients are going to qualify for coverage. We concluded that the risk of medically based claims denial is not a major factor in explaining the cost differential between VA and Medicare. However, because this factor results from the different ways home oxygen is authorized in the two programs, we considered it as part of our overall adjustment of VA payment rates.
Industry representatives stated that the administrative burden of complying with Medicare requirements accounts for a major portion of the difference between VA and Medicare payment rates. One major burden they cited is the certificate of medical necessity, which must be completed by the prescribing physician before the claim can be submitted to Medicare for payment. Every supplier we interviewed complained about the difficulty in quickly obtaining this document. The industry study estimated that documenting patient eligibility represents 4 percent of the difference between VA and Medicare payment rates. At the same time, documentation requirements for this expensive, often lifelong benefit can reasonably be expected to be fairly stringent. Recent changes have reduced the administrative burden by allowing many patients to receive lifetime certification. Also, HCFA recently issued a draft revision of the certificate of medical necessity in an attempt to simplify the form and make it easier for doctors to complete. For example, the revised certificate no longer requires doctors to justify the portable unit. Our review of patient case records showed that, while most certificates are completed within 30 days of service setup, there is documentary support for the suppliers’ contention that there are significant problems with this process. We found several examples of long delays and one case in which a patient died and the doctor refused to fill out the certificate, so the firm was not paid at all for its services. Most suppliers we talked with had developed strategies to facilitate the completion of these certificates. These strategies involved extra staff time and costs: for example, sending a representative to doctors’ offices to request the certificate in person. For the records we reviewed, we found that 64 percent of the certificates were completed within 30 days of the supplier’s starting service and 88 percent were done within 90 days. While obtaining the certificate of medical necessity represents a major start-up cost, the impact on the difference between the monthly VA and Medicare payment rates is less when that cost is amortized over the length of time that the certificate is valid. For most patients, eligibility must be renewed after the first year. At that time, the doctor may certify the patient for lifetime eligibility, and the patient never has to be recertified again. Once a patient’s eligibility is established, Medicare billing is usually electronic and fairly straightforward. One VA contractor we visited noted that the electronic billing process for Medicare is far less cumbersome than submitting paper invoices each month to the local VA medical center. This indicates that the VA system is not entirely without processing costs, although when the medical eligibility documentation is included, Medicare’s overall administrative burden on suppliers is greater. We concluded that the administrative burden for documenting medical eligibility and obtaining Medicare reimbursement is significantly greater than that associated with providing services under a VA medical center contract. Therefore, an adjustment to the VA rate is appropriate for comparison with the Medicare rate. The Medicare home oxygen benefit requires that beneficiaries pay an annual deductible and 20 percent of the allowed reimbursement amount every month. Industry representatives contend that the cost of billing and collecting this copayment adds to the cost of providing services to Medicare beneficiaries.
In addition, they point out that a portion of the copayment owed to them may never be collected. The VA program, in contrast, pays 100 percent of the contract price. The industry estimate states that this accounts for 6 percent of the difference between the cost of the VA program and Medicare. Noncollection of copayments does represent a cost differential between VA and Medicare but can only justify a small amount of the difference in payment rates. Our review of case records at the suppliers we visited showed that 86 percent of the Medicare beneficiaries whose records we saw either had supplemental insurance or were covered by Medicaid. Of the 14 percent of beneficiaries with neither private supplemental insurance nor Medicaid coverage, we found that only 3 percent had financial hardship waivers in their records. Even if suppliers were not able to collect copayments from three times the number of patients with hardship waivers, the uncollected amount would represent only 2 percent of the total revenue suppliers receive for Medicare home oxygen.
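The 2-percent figure above follows from the share of patients assumed unable to pay and the 20-percent copayment; a minimal sketch of the arithmetic, using the figures cited in this report:

```python
# Illustrative arithmetic only; figures are those cited in this report.
COPAY_SHARE = 0.20          # beneficiary copayment: 20% of the Medicare allowance
HARDSHIP_WAIVERS = 0.03     # share of patients with financial hardship waivers

# Pessimistic assumption used above: three times as many patients as have
# waivers never pay their copayment.
noncollecting_share = 3 * HARDSHIP_WAIVERS
uncollected_revenue = noncollecting_share * COPAY_SHARE

print(f"Uncollected share of total revenue: {uncollected_revenue:.1%}")
# Uncollected share of total revenue: 1.8%  (the "only 2 percent" cited above)
```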
Pursuant to a congressional request, GAO reviewed the appropriateness of Medicare's reimbursement rates for home oxygen, focusing on: (1) its comparison of Medicare and Department of Veterans Affairs (VA) payment rates; (2) concerns about access to liquid oxygen systems and lightweight portable equipment for patients who are highly active; and (3) standards for the services associated with meeting patients' home oxygen needs. GAO noted that: (1) Medicare's fee schedule allowances for home oxygen exceeded GAO's adjusted estimate for the competitive marketplace rates paid by VA by almost 38 percent; (2) the rate reductions mandated by the Balanced Budget Act of 1997 will bring Medicare's fee schedule more into line with the competitive marketplace costs for home oxygen; (3) concerns have been raised that these reductions could reduce Medicare beneficiaries' access to portable units; (4) under Medicare's modality-neutral payment system, home-based liquid oxygen systems, which patients can use to refill portable units, do not offer suppliers the attractive profit margins associated with lower-cost oxygen concentrators; (5) lightweight, less cumbersome portable systems, which may increase patient mobility, are more expensive than traditional portable gas cylinders; (6) GAO's analysis shows that VA patients were receiving more portable units and refills than Medicare patients were, even though VA's payment rate, adjusted for comparability, was lower than Medicare's; (7) the upcoming reductions in Medicare allowances may lead some suppliers to provide Medicare patients with the least costly systems available, regardless of their patients' needs; (8) the Department of Health and Human Services (HHS) could use its authority under the recently enacted legislation to establish separate reimbursement rates for oxygen concentrators, liquid systems, regular portable units, and lightweight portable units, as long as the impact on overall Medicare costs is budget neutral; (9) the evolution in technology and costs of oxygen delivery systems--and the clinical indications for initiating and terminating the use of more expensive, lightweight portable units--warrant further examination by HHS and the Health Care Financing Administration (HCFA) before deciding whether Medicare's reimbursement system should be restructured; (10) HCFA has not established standards to ensure that home oxygen suppliers provide Medicare patients even basic support services; (11) oxygen suppliers who serve Medicare patients need only comply with basic registration and business requirements associated with obtaining a Medicare supplier number; (12) in contrast, VA encourages its medical centers to contract with suppliers who are accredited by the Joint Commission on Accreditation of Healthcare Organizations or comply with its standards; (13) VA patients typically received more frequent service visits than Medicare patients; and (14) the Balanced Budget Act requires HHS to establish service standards for Medicare home oxygen suppliers.
Once the Transformation Program is completed, USCIS envisions that the new electronic adjudication capabilities and improved information technology will improve agency operations and enable greater data sharing and management of information. USCIS expects the new system, named the USCIS Electronic Immigration System (ELIS), to have features that will allow USCIS to meet its transformation goals of enhanced national security and system integrity, better customer service, and operational efficiency. For example, once USCIS ELIS is implemented, USCIS expects that:

- Individuals will be able to establish an account with USCIS and file applications over the Internet, as well as obtain information on the status of their applications.
- USCIS will automatically apply risk-based rules to incoming applications to identify potentially fraudulent applications and national security risks.
- Adjudicators will have electronic access to applications, as well as relevant USCIS policies and procedures and external databases, to aid in decision making.
- USCIS will have the necessary management information to help it allocate workload and measure performance.
- USCIS will have electronic linkages with other agencies, such as the Departments of Justice and State, for data sharing and security purposes.

Figure 1 depicts the key features of USCIS ELIS, as envisioned, from a USCIS customer perspective. Figure 2 depicts these key features, as envisioned, from a USCIS adjudicator perspective. The Transformation Program intends to design and develop five core business processes to form the foundation of USCIS ELIS and process and manage all applications. Table 1 identifies and briefly describes the five core business processes. USCIS plans to deploy USCIS ELIS in a series of five releases, labeled A through E. Within the first two releases, USCIS ELIS is to be available to USCIS customers applying for nonimmigrant benefit types, followed by immigrant benefits, humanitarian benefits, and citizenship benefits. Much of the functionality needed to operate the five core business processes is to be established during Releases A and B, with an enhanced level of functionality to be added during Releases C through E. Table 2 below shows the order in which ELIS’s five releases are to be deployed and available to USCIS customers. In 2006, USCIS drafted a transformation strategic plan to guide its modernization efforts and established the Transformation Program Office (TPO) to lead and carry out the effort. By 2007, USCIS established a governance structure for the overall management, leadership, decision making, and oversight of the Transformation Program. The TPO governance structure includes three key groups: (1) the Transformation Leadership Team (TLT), responsible for the overall program direction and coordination of transformation initiatives within the agency; (2) the Program Integrated Product Teams (PIPT), responsible for advising on and approving strategy and performance measures, and for overseeing and managing the program, including cost, schedule, and performance; and (3) the Working Integrated Product Teams (WIPT), composed of agencywide representatives with expertise to help define the transformed business processes and their operational aspects. In addition to these key groups, USCIS also holds Program Management Review (PMR) meetings to help manage the transformation effort.
Each month, the Transformation Program conducts PMR meetings to assess the status of the overall program and solutions architect activities in terms of cost, schedule, performance, and risk. Major program groups associated with USCIS’s transformation efforts, and the solutions architect, report on the status of activities and deliverables for which they have responsibility. The monthly PMR reports help provide an up-to-date snapshot of top program risks and concerns and how they are being mitigated, as well as the overall status of the program in meeting its milestones, among other information. In November 2008, USCIS awarded a solutions architect contract for approximately $500 million, to be allocated over a 5-year period, to design, develop, test, deploy, and sustain the Transformation Program by November 2013. As such, the Transformation Program is USCIS’s largest acquisition, and according to USCIS’s current Director, “no project is more important to long-term operational improvement and efficiency than Transformation.” USCIS has funded the Transformation Program through both direct legislative appropriations and revenue from application fees paid by applicants. In fiscal years 2006 and 2007, Congress appropriated a combined total of $71.7 million to fund Transformation Program efforts. Since fiscal year 2007, USCIS’s premium processing fee revenue has been the primary source of funding for the Transformation Program. In addition, the program has used funds from its application fee account to pay for the salaries and benefits of USCIS Transformation Program staff. As shown in Table 3, USCIS spent about $455 million from fiscal years 2006 through 2010, which includes costs incurred by both the solutions architect and USCIS. USCIS estimates it will spend approximately $248 million in fiscal year 2011, for an estimated total cost of about $703 million through fiscal year 2011. In 2003, DHS established an investment review process to help reduce risk and increase the chances for successful acquisition outcomes by providing departmental oversight of major investments throughout their life-cycles. The process was intended to help ensure that funds allocated for investments through the budget process were being spent wisely, efficiently, and effectively. In March 2006, DHS issued Management Directive No. 1400, which defined and updated DHS’s investment review process. The directive required programs to prepare certain documents before transitioning to the next acquisition phase to ensure the program was ready to move forward. To implement more rigor and discipline in its acquisition processes, DHS created the Acquisition Program Management Division in 2007 to develop and maintain acquisition policies, procedures, and guidance as a part of the system acquisition process. In November 2008, DHS issued an interim acquisition directive and guidebook that superseded Management Directive No. 1400 and provided programs guidance to use in preparing key documentation to support component and departmental decision making. In January 2010, DHS finalized the acquisition directive, which established acquisition life-cycle phases and senior-level approval of each major acquisition program at key acquisition decision events during a program’s acquisition life-cycle. This directive established the acquisition life-cycle framework with four phases: 1. identify a capability need (need phase); 2. analyze and select the means to provide that capability (analyze/select phase); 3.
obtain the capability (obtain phase); and 4. produce, deploy, and support the capability (produce/deploy/support phase). Each acquisition phase culminates in a presentation to the DHS ARB, which is to review each acquisition at least three times at key acquisition decision events during a program’s life-cycle. Figure 3 presents the four DHS acquisition phases, including the documents presented to the ARB and their review as defined in the acquisition directive. The Acquisition Decision Authority—the Chief Acquisition Officer or other designated senior-level official—is to chair the ARB and decide whether the proposed acquisition meets the requirements necessary to move on to the next phase and eventually to full production. The directive outlines the extent and scope of required program, project, and service management; the level of reporting; and the acquisition decision authority, based on whether the acquisition is considered major (life-cycle costs estimated at or above $300 million) or nonmajor (life-cycle costs estimated to be below $300 million). DHS considers the USCIS Transformation Program a major acquisition, and as such, the decision authority is the DHS Under Secretary for Management. Following an ARB meeting, the Acquisition Program Management Division is to prepare an acquisition decision memorandum as the official record of the meeting. This memorandum is to be signed by the acquisition decision authority and must describe the approval or other decisions made at the ARB and any action items to be satisfied as conditions of the decision. The ARB reviews provide the department an opportunity to determine a program’s readiness to proceed to the following life-cycle phase. However, we reported in March 2011 that the ARB had not reviewed most of DHS’s major acquisition programs by the end of fiscal year 2009, and that the programs that were reviewed had not consistently implemented action items identified in the review by established deadlines. Our prior work has shown that when these types of reviews are skipped or not fully implemented, programs move forward with little, if any, early department-level assessment of the programs’ costs and feasibility, which contributes to poor cost, schedule, and performance outcomes. In June 2011, DHS reported that it was taking action to strengthen its acquisition management processes by reviewing programs on an ongoing basis rather than only at key acquisition decision events, and by developing decision-making support tools to aid with oversight. These are positive steps that, if effectively implemented, should help strengthen DHS’s acquisition management processes. USCIS has not consistently followed the acquisition management approach that DHS outlined in its management directives in developing and managing the Transformation Program. Consistent with DHS acquisition policy, USCIS prepared a Mission Needs Statement to justify the need and value of the Transformation Program in pursuing the proposed acquisition. In addition, USCIS identified and analyzed various alternatives for transforming its business processes. However, USCIS did not complete several acquisition planning documents required by DHS policy prior to moving forward with an acquisition approach and selecting a solutions architect to develop USCIS ELIS’s capabilities. The lack of this program documentation contributed to the Transformation Program being more than 2 years behind schedule in its planned initial deployment of USCIS ELIS and increased program costs.
In addition, USCIS has not developed reliable or integrated schedules, both of which, under DHS acquisition guidance, are required and essential acquisition management elements. As a result, USCIS cannot reliably estimate when all releases of the Transformation Program will be delivered. USCIS awarded a solutions architect contract to begin capability development activities prior to having a full understanding of the requirements and resources needed to execute the program. DHS’s acquisition policy requires that programs conduct planning efforts to establish a program’s operational requirements, to develop a program baseline against which to measure progress, and to prepare a plan that outlines the program’s acquisition strategy. These planning efforts are to be documented in three key documents: the Operational Requirements Document, the Acquisition Program Baseline, and the Acquisition Plan. According to DHS policy, these key documents are to be completed before selecting and moving forward with an acquisition approach. According to agency officials, the goal is to help ensure that before committing funds to develop a capability, the program’s operational requirements, cost, schedule, and performance parameters have been fully defined. We have previously reported that firm requirements must be established and sufficient resources must be allocated at the beginning of an acquisition program, or the program’s execution will be subpar. We have also reported that well-defined requirements are critical to ensuring communication about what the government needs from the contractor providing services. However, when the solutions architect contract was awarded in November 2008, one document had not been completed and the other two did not fully address the program’s estimated cost, planned schedule, or performance parameters. Below is a summary of the three planning documents that USCIS did not develop according to DHS policy: Operational Requirements Document (ORD)—According to DHS acquisition policy, this document is to describe the operational mission, objectives, capabilities, and operational user key performance parameters (i.e., the minimum as well as the desired levels of performance that must be met to provide a useful capability to the user) and should be completed before an acquisition approach is selected. However, USCIS did not develop the first version of the ORD until October 2009, almost a year after the award of the solutions architect contract. Program officials acknowledged that an ORD was not prepared prior to selecting an acquisition approach but stated that the solutions architect had sufficient information on the program’s operational requirements to begin work. For example, they stated that the contractor received USCIS’s Enterprise Segment Activity Roadmap (ESAR), which described various activities related to ELIS’s core business processes. However, in a February 2009 memorandum, the USCIS Chief Information Officer stated that the ESAR did not provide a realistic capability to guide, constrain, or measure the solutions architect because the business process mappings were incomplete and vague, among other reasons. Acquisition Program Baseline (APB)—This document is to provide cost, schedule, and performance parameters. DHS policy requires it to be prepared prior to selecting an acquisition approach. USCIS completed a draft acquisition program baseline in May 2008, prior to awarding the solutions architect contract.
However, the May 2008 APB did not fully address cost, schedule, and performance parameters as required by DHS policy. Regarding cost, the APB estimated the Transformation Program would cost approximately $410.7 million from fiscal year 2009 through the second quarter of fiscal year 2013. However, this estimate included only the estimated contract cost for a solutions architect. According to program officials, the estimate did not include USCIS costs for upgrading its information technology infrastructure, such as upgrading networks and servers, or the costs of USCIS Transformation Program personnel and other support contractors, because these costs had yet to be defined. Moreover, USCIS had not yet developed a life-cycle cost estimate, which, per DHS acquisition policy, is a source document used to develop the APB’s cost parameters. Regarding schedule, the May 2008 APB provided only a high-level view of the program’s key milestones. It shows that the program’s start was expected in fiscal year 2009 and that all benefit types were to be deployed in USCIS ELIS by fiscal year 2013. According to DHS acquisition policy, this high-level schedule is to be based upon a program’s integrated master schedule, a larger and more detailed delineation of program milestones and associated deliverables. However, USCIS did not complete an integrated master schedule prior to contract award. In the absence of an integrated master schedule, program officials were unable to clarify for us how USCIS determined the program’s key milestones, which had USCIS implementing USCIS ELIS from fiscal years 2009 through 2013. Lastly, DHS acquisition policy required that performance parameters be based upon operational requirements. The May 2008 acquisition program baseline captured performance parameters, but they were not based on operational requirements, which, as discussed above, USCIS had not yet developed. Acquisition Plan—This document is to address, among other things, technical, business, management, and other significant considerations affecting the acquisition strategy and contract selection. USCIS developed an acquisition plan in October 2007; however, this document did not address all capabilities for sustaining and maintaining the acquisition, such as certain technical considerations that would affect the acquisition strategy, as required by DHS acquisition guidance. For example, USCIS was to upgrade its information technology infrastructure, but the October 2007 acquisition plan did not reflect these technical considerations. Moreover, cost information in the acquisition plan is not traceable to other documents, such as a validated life-cycle cost estimate or an acquisition program baseline, as required by DHS guidance. Specifically, the October 2007 acquisition plan presented a $3.4 billion estimated cost for the Transformation Program. According to program officials, the $3.4 billion included information technology costs and covered the life of the program, similar to a life-cycle cost estimate. However, this cost had not been validated as a life-cycle cost estimate by the DHS Cost Analysis Division, and the May 2008 acquisition program baseline makes no reference to the $3.4 billion cost over the life of the program, even though, under DHS guidance, the acquisition program baseline is to reflect all cost parameters.
According to program officials, the solutions architect contract was performance-based, meaning that USCIS specified the outcomes it was seeking to achieve and gave the solutions architect responsibility for identifying and delivering the assets needed to achieve these outcomes. As a result of this approach, many of the specifics that would affect the program’s cost and schedule were to be determined after the contract was signed. The contract called for the solutions architect to use the 90-day base period from November 2008 to February 2009 to develop a plan to (1) identify work activities to be performed; (2) assign resources to these activities; (3) project the start and completion dates for these activities; (4) provide deliverables to the TPO; and (5) establish performance measures that the contractor and USCIS could use to measure progress. For example, the specific operational requirements and USCIS information technology upgrades that would be needed would depend upon the solutions architect’s plan. However, as early as 2004—4 years prior to the solutions architect contract—we reported that this type of acquisition strategy and contracting approach had led to poor acquisition outcomes at DHS. Specifically, we reported that a contracting approach that assigned a U.S. Coast Guard contractor significant responsibilities, such as the identification of work activities and deliverables, had been a primary reason for performance, cost, and schedule problems, as it had led to incomplete information about performance and production risks. Program officials emphasized that the work completed during this 90-day base period was done in conjunction with USCIS, which helped to inform the production of these deliverables. Incomplete program planning documents at the start of the program contributed to the delayed deployment of USCIS ELIS, increased costs, and anticipated benefits not being achieved. According to the November 2008 solutions architect contract, the deployment of capabilities was to begin by September 2009 and be completed by 2013. USCIS did not meet the September 2009 deployment milestone. In an April 2009 memorandum to USCIS’s Acting Deputy Director and Chief Financial Officer, the program manager stated that, based on the solutions architect’s proposal, the program did not have sufficient staff to provide adequate government oversight of the solutions architect or funding to support the proposed solution, “rendering the solution unachievable.” Consequently, the solutions architect contract was modified by scaling back the scope to allow the contractor to focus on work activities necessary to develop the five core business processes. Accordingly, TPO was not authorized to start preliminary design work for Release A until December 2009. By December 2009, TPO proposed deployment of this first release by April 2011. However, as noted in its July 2010 Acquisition Decision Memorandum, TPO experienced delays while defining Release A requirements. These delays resulted in a revised deployment milestone to occur between June and August 2011. Difficulties defining requirements continued, and in November 2010, USCIS revised the deployment milestone to December 2011. By January 2011, the requirements had not yet been completed, and by April 2011, USCIS reduced the scope of the first release to meet the newly revised December 2011 deployment time frame.
Operational requirements were completed in April 2011 and approved by the ARB in July 2011, nearly 3 years after the solutions architect contract was awarded. Table 4 provides information on Transformation Program milestones, status, and acquisition planning after contract award. Because the acquisition strategy and associated cost parameters were not fully outlined at the start of the program, costs associated with the Transformation Program have increased above the original estimate. The program’s May 2008 acquisition program baseline estimated that the total cost of the program from fiscal years 2009 to 2013 would be $410.7 million. However, the estimated cost through fiscal year 2011 is about $703 million, about $292 million more than estimated in May 2008. This increase is due to the fact that USCIS’s original planning efforts did not cover the entire program, as required by DHS acquisition planning guidance. For example, the acquisition program baseline did not include USCIS’s information technology enabling costs, which, based on data gathered from program officials, total approximately $618 million and include activities such as upgrading its technology infrastructure. In addition, staffing levels have increased significantly from original projections. At the start of the Transformation Program, USCIS had allocated funding for 20 full-time equivalent staff assigned to TPO. As of June 2011, the program had an authorized staffing level of 98. Other costs not planned for have contributed to the program’s overall cost increases. For example, the cost of an operational test agent, who would be responsible for planning, conducting, and reporting independent operational testing and evaluation for Release A, was not included in the acquisition planning process. USCIS officials from TPO and the Office of Information Technology (OIT) agreed that an operational test agent appeared to be a duplicative effort because TPO had already planned to conduct independent testing. However, DHS denied TPO’s request for a waiver of the operational test agent requirement. As a result, USCIS contracted with an independent operational test agent by October 2010, and as of June 2011, TPO had awarded approximately $1.8 million toward this contract. USCIS’s Transformation Program planned to deploy USCIS ELIS first to USCIS customers applying for citizenship benefits. However, once USCIS defined the costs associated with digitizing (scanning paper documents into an electronic format) existing records following the June 2009 ARB, USCIS concluded that the original plans were not achievable within the associated budget. As a result, in December 2009, USCIS requested—and was authorized by the ARB—to change the order of deployment and begin with the nonimmigrant instead of the citizenship line of business. Moreover, according to program officials, from June 2010 to March 2011, USCIS worked to fully define the operational requirements that had not been developed prior to the start of the solutions architect contract. For example, an operational requirement of USCIS ELIS is account setup and intake, which is the ability of USCIS customers to set up accounts and of adjudicators to process them through one, person-centric account. TPO worked with subject-matter experts from USCIS’s field and headquarters offices who were most familiar with the adjudication process to map out the steps needed to fully define USCIS ELIS’s operational requirements.
However, in an ARB meeting held in July 2010, and in a program management review meeting for January 2011, program officials explained that defining operational requirements was taking longer than expected due to the complexity of the rules that needed to be defined in USCIS ELIS and the need for all stakeholders to review and agree to these rules. For example, one requirement—account setup and intake—comprised 35 operational functions that USCIS ELIS must perform, including setting up an account online, scheduling an appointment, and evaluating any identity discrepancy. To enable completion of the operational requirements needed to move into subsequent phases of development for Release A, USCIS moved approximately 10 percent of the capabilities into the second release. In May 2011, program officials told us they had changed the scope of the first release and that full automation of Release A would not be in place in December 2011. Further, only one nonimmigrant benefit would be deployed at that time; other nonimmigrant benefits were scheduled to be deployed between January and October 2012. DHS has increased its oversight of the Transformation Program since it authorized USCIS to award the solutions architect contract in October 2008. In 2008, we reported that DHS’s investment review process had not provided the oversight needed to identify and address cost, schedule, and performance problems in its major acquisitions, including ensuring that programs prepared key documents prior to moving into subsequent phases of program development. At the time, we made several recommendations aimed at better ensuring that DHS fully implemented and adhered to its acquisition review process, including tracking major investments. DHS generally agreed with our recommendations and has since taken actions to improve its acquisition review process, including developing a database to capture and track key program information, such as cost and schedule performance, contract awards, and program risks. The database became fully operational in September 2009. DHS Acquisition Program Management Division officials acknowledged that there was limited oversight of the Transformation Program at the time the contract was signed, primarily due to having limited staff to oversee DHS’s programs. These officials further stated that DHS was continuing to develop its acquisition oversight function and had begun to implement the revised acquisition management directive, which included more detailed guidance for programs to use when informing component and departmental decision making. Since the contract award, the ARB has met six times to review the Transformation Program’s status. At these meetings, the ARB has directed the TPO to address a number of issues related to cost, schedule, and performance. For example, in June 2009, the ARB held two meetings to discuss risks that had been identified during the 90-day baseline period, such as inadequate staffing levels and delays in delivering required government-furnished items to the contractor. As a result of these risks, the ARB authorized USCIS to move forward with awarding contract options one and two but restricted the amount that could be expended. The Transformation Program Office was also required to return to the ARB for authorization to award any additional options.
The December 2009 ARB Acquisition Decision Memorandum noted that the program had improved its staffing significantly, but issues remained; as noted in the August 2010 ARB Acquisition Decision Memorandum, these included the need to fully define system requirements before returning to the ARB for authorization to enter the design, development, and testing phases. As a result of this and other outstanding action items, the ARB did not grant the program permission to proceed with development as requested by USCIS at the July 2010 and November 2010 ARBs. Subsequently, in April 2011, the program completed development of the operational requirements and the acquisition program baseline. USCIS received departmental approval for both the requirements and the acquisition baseline in July 2011, along with approval to proceed with development. DHS Acquisition Program officials stated that USCIS had received approval because it had fully defined the operational requirements for Release A, but that USCIS was expected to return to the ARB in December 2011 to obtain a decision on whether Release A can be deployed as scheduled at the end of the year. In addition, DHS officials stated that before the ARB approves releases beyond Release A, TPO will need to demonstrate that:

- USCIS ELIS’s core business processes work in accordance with its requirements; and
- USCIS can afford to pay for the rest of the program.

In several meetings, the ARB has requested that USCIS refine or otherwise provide a complete and documented life-cycle cost estimate for DHS review and validation. USCIS subsequently completed life-cycle cost estimates in September 2009 and November 2010, and an updated version in March 2011. This most recent version estimated that the Transformation Program’s life-cycle cost would be approximately $1.7 billion from fiscal years 2006 through 2022. However, as referenced in the life-cycle cost estimate—a planning document—USCIS cannot estimate several work elements because the program does not have the information required to estimate complete costs, such as requirements beyond Release A. According to the TPO Program Manager, DHS has reviewed and provided guidance on the development of the life-cycle cost estimate, but it has not yet validated the life-cycle cost estimate as being sound and reasonable. Therefore, at this time, the total expected costs of the program from initiation through completion remain uncertain. Prior to validation of the life-cycle cost estimate, program officials stated that TPO and the DHS Cost Analysis Division are to work closely to ensure the cost estimate is sound and reasonable. According to best practices in cost estimating, an updated life-cycle cost estimate is to, among other things, show the source of its data. In the case of the Transformation Program, an updated life-cycle cost estimate should show the source of the data underlying the software design and cost estimating model, and the equations used to estimate the costs of this large effort. The most recent Acquisition Decision Memorandum, dated July 7, 2011, states that the Transformation Program Office must work closely with the DHS Cost Analysis Division to complete a life-cycle cost estimate by September 30, 2011.
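For context on what the “equations used to estimate the costs” of a software-intensive program typically look like, the following is a minimal sketch of a generic, COCOMO-style parametric model. The coefficients and size inputs are illustrative assumptions only; they are not parameters from USCIS’s estimate or from DHS guidance.

```python
# Generic parametric software cost model: effort grows as a power of size.
# The coefficients below are illustrative, textbook-style values, not
# USCIS's actual cost estimating model.

def effort_person_months(ksloc: float, a: float = 2.94, b: float = 1.10) -> float:
    """Estimated effort in person-months for `ksloc` thousand lines of code."""
    return a * ksloc ** b

for size in (100, 200, 400):  # hypothetical system sizes in KSLOC
    print(f"{size} KSLOC -> ~{effort_person_months(size):,.0f} person-months")

# Because b > 1, doubling size more than doubles effort, which is why the
# source data and equations behind an estimate matter for validation.
```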
USCIS is continuing to manage the Transformation Program without specific acquisition management controls, such as reliable schedules; as a result, it will be difficult for USCIS to provide reasonable assurance that it can meet its future milestones. USCIS has established schedules for the first release of the Transformation Program, but our analysis shows that these schedules are not reliable, as they do not meet best practices for schedule estimating. For example, the schedules did not identify all activities to be performed by the government and the solutions architect. Additionally, USCIS has encountered a number of challenges in implementing the schedules, such as assumptions that have not been met regarding the time frames in which either the solutions architect or USCIS would complete certain tasks. For example, according to an April 2010 program management review, USCIS planned to provide the solutions architect with two technical environments to conduct production and testing activities by December 2010. However, USCIS has since revised the schedule due to challenges in procuring the hardware and software needed before these two environments were ready for the solutions architect. Based on the revised schedule, delivery of the technical environments was delayed to April 2011, according to USCIS, so that OIT could take actions to address the delay, such as borrowing equipment until a contract protest was resolved and providing the solutions architect with a walk-through of the technical environments to ensure they met its needs. Program officials stated that factors outside their control, such as contract protests or the review and approval of system requirements, have contributed to challenges in implementing the schedules. According to program officials, defining and developing requirements was expected to last about 2½ to 3 months. However, USCIS completed the requirements in 9 months, which included review and validation of these requirements by agency leadership. Program officials stated that detailed reviews and approval by agency leadership took longer than expected. Best practices in schedule estimating state that a comprehensive schedule should include a schedule risk analysis, so that the risk to the estimate if items are delayed can be modeled and presented to management, including, among other things, assumptions about equipment deliveries or the length of internal and external reviews; a minimal illustration of such an analysis follows this discussion. However, according to a January 2011 program management review, changes to the program were happening so rapidly that no analysis was completed to determine their impact on the schedule. In addition to the challenges USCIS has encountered in carrying out the schedules as originally planned, on the basis of our analysis we found that the current schedules for the first release of the Transformation Program are of questionable reliability. Best practices state that the success of a large-scale system acquisition program, such as the Transformation Program, depends in part on having reliable schedules that identify:

- when the program’s set of work activities and milestone events will occur;
- how long they will take; and
- how they are related to one another.

Among other things, reliable schedules provide a road map for systematic execution of a program and the means by which to gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine best practices associated with developing and maintaining a reliable schedule. To be considered reliable, a schedule should meet all nine practices. In a July 2008 memorandum, DHS’s Under Secretary for Management endorsed the use of these scheduling practices and noted that DHS would be using them. The nine scheduling best practices are summarized in table 5.
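As an illustration of what a schedule risk analysis involves, the sketch below runs a small Monte Carlo simulation over a hypothetical three-activity chain. The activities, durations, and triangular distributions are invented for illustration; an actual analysis would draw them from the program’s own schedule.

```python
# Minimal schedule risk analysis: sample uncertain activity durations and
# estimate the probability of missing a milestone. All activities and
# durations are hypothetical.
import random

# (optimistic, most likely, pessimistic) durations in weeks for a chain of
# dependent activities: requirements -> development -> testing
activities = [(8, 12, 20), (10, 16, 30), (4, 6, 12)]
milestone = 40  # planned weeks until deployment

trials = 100_000
misses = 0
for _ in range(trials):
    total = sum(random.triangular(low, high, mode)
                for low, mode, high in activities)
    if total > milestone:
        misses += 1

print(f"estimated probability of missing the milestone: {misses / trials:.0%}")
```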
The Transformation Program has 18 individual schedules. Table 6 summarizes the findings of our assessments, as of November 2010, of two of these individual schedules, which represent the bulk of the Transformation Program efforts and those most critical to the production of USCIS ELIS. Specifically, these schedules track activities associated with USCIS’s OIT and the solutions architect. TPO is responsible for managing key acquisition functions associated with the Transformation Program; thus, USCIS is responsible for tracking and overseeing the OIT and solutions architect activities and associated schedules. Appendix I includes a detailed discussion of our analysis. As shown above, the two Transformation Program schedules, for the most part, did not substantially or fully meet the nine best practices. For example, neither the OIT nor the USCIS-approved solutions architect schedule contained detailed information for Release A activities beyond March 2011. In addition, both schedules were missing a significant number of logic links between activities—the links that indicate which activities must finish before others and which activities may not begin until others have been completed (the sketch following this discussion illustrates why these links matter). While we cannot generalize these findings to all 18 schedules, our review raises questions about the reliability of the program’s schedules. Based on our discussions with the Transformation Program’s lead program scheduler, this condition stems, in part, from the “aggressiveness of the Transformation Program to implement most of the capabilities within the first 3 years of the program” and a lack of program management resources for developing the knowledge to create and maintain schedules. Moreover, and regardless of the aggressiveness of the solution, our best practices call for schedules to reflect all activities—government, contractors, and any other necessary external parties—essential for successful program completion. As such, neither of the Transformation Program schedules we reviewed substantially met this practice. Furthermore, as demonstrated by the challenges USCIS has encountered in carrying out the schedule as originally planned, not including all work for all deliverables, regardless of whether the deliverables are the responsibility of the government or the contractor, may result in confusion among team members and lead to management difficulties because of an incomplete understanding of the plan and of the progress being made. Collectively, and moving forward, not meeting the nine key practices increases the risk of schedule slippages and related cost overruns and makes meaningful measurement and oversight of program status and progress, as well as accountability for results, difficult to achieve. For example, in June 2011, a program management review noted a schedule risk should the development, testing, and deployment process slip again, which could result in USCIS being unable to deliver the first release in December 2011. A schedule risk analysis could be used to determine the level of uncertainty and to help mitigate this risk. Similarly, capturing and sequencing all activities, as outlined in best practices, could help identify the extent to which other activities linked to this schedule risk are affecting its progress. Furthermore, without schedules that meet scheduling best practices, it will be difficult for USCIS to effectively monitor and oversee the progress of an estimated $1.7 billion to be invested in the acquisition of USCIS ELIS.
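To make concrete why logic links matter, the sketch below computes the minimum duration of a toy activity network from its dependency links; were those links stripped out, the same computation would see only isolated activities and badly understate the end-to-end duration. The activities and durations are hypothetical.

```python
# Critical-path sketch for a toy activity network. Each activity has a
# duration in weeks and a list of predecessors (the "logic links").
# All names and numbers are hypothetical.

activities = {
    "requirements": (6, []),
    "design":       (8, ["requirements"]),
    "build_env":    (5, ["requirements"]),
    "development":  (12, ["design", "build_env"]),
    "testing":      (6, ["development"]),
}

finish: dict[str, int] = {}  # earliest finish time per activity

def earliest_finish(name: str) -> int:
    if name not in finish:
        duration, preds = activities[name]
        finish[name] = duration + max(
            (earliest_finish(p) for p in preds), default=0)
    return finish[name]

total = max(earliest_finish(a) for a in activities)
print(f"minimum project duration along the critical path: {total} weeks")
# With the predecessor lists emptied, the result would be just the longest
# single activity (12 weeks), masking the true 32-week duration.
```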
In August 2011, TPO provided us the updated USCIS-approved solutions architect schedule. Program officials indicated that this updated schedule addressed some areas in which their previous schedule was deficient according to our assessment against the nine scheduling best practices. For example, they said that this schedule included activities through December 2011 rather than only through March 2011. In addition, officials indicated that they have confidence in meeting the December 2011 milestone. Specifically, they said that USCIS and the solutions architect have tested over 70 percent of the Release A capabilities that are to be released in December 2011 and demonstrated these capabilities to the USCIS leadership team in August 2011. On the basis of our analysis of the updated USCIS-approved solutions architect schedule, we determined that the updated schedule did not address many of the deficiencies we identified in the earlier version. For example, the updated schedule did contain activities through December 2011, but logic links were still missing between activities indicating which activities must finish before others and which activities may not begin until others have been completed. The schedule’s authorized work has therefore not been established in a way that describes the sequence of work, which prevents the schedule from meeting other best practices, such as establishing a critical path or developing a schedule risk analysis. Thus, the updated USCIS-approved solutions architect schedule does not fully meet all nine key practices, making meaningful measurement and oversight of program status and progress difficult to achieve. USCIS did not provide us with an updated OIT schedule; therefore, we were unable to determine to what extent the deficiencies we identified in the earlier version were addressed. Further, USCIS established the Transformation Program as a long-term program made up of five releases to procure, test, deploy, and maintain USCIS ELIS, but USCIS officials confirmed in October 2010 that there was no integrated master schedule for the entire Transformation Program. In addition, the schedules we received in August 2011 were also not integrated into a master schedule. According to best practices, an integrated master schedule is to contain the detailed tasks necessary to ensure program execution and is required for developing key acquisition planning documents under DHS acquisition management guidance. Among other things, best practices and related federal guidance call for a program schedule to be programwide in scope, meaning that it should include the integrated breakdown of the work to be performed by both the government and its contractors over the expected life of the program. According to program officials, when the Transformation Program’s planning efforts began, USCIS was unable to develop an integrated master schedule for the Transformation Program due to the complexity of integrating the numerous individual schedules and the lack of the skilled staff necessary to develop and manage such a schedule. In addition, program officials explained that scheduling software for developing and maintaining individual schedules was not used by every organization performing transformation work, such as OIT, even though the program issued guidance to all organizations in August 2010 on scheduling best practices, including the use of scheduling software.
As an alternative to an integrated master schedule, and for ease of reporting to the ARB and other senior officials, TPO developed a high-level tracking tool that summarizes dates and activities for the first release of the program, based on individual schedules, such as the OIT and solutions architect schedules, that TPO does not directly manage. In a September 2010 briefing to agency leadership, program officials stated that this high-level tracking tool gave USCIS the capacity to analyze the schedule and that TPO used the tool to ensure coordination and alignment of activities by collaborating with the staff responsible for managing the individual schedules. However, this tracking tool is not an integrated master schedule, as it does not integrate all activities necessary to meet the milestones for Release A; rather, it is a selection of key activities drawn from the individual schedules maintained by USCIS components and the solutions architect. Moreover, the Transformation Program Manager expressed concern in a May 2011 program management review that the information reported in the high-level tracking tool was not being reported in the individual schedules. The tracking tool also falls short of an integrated master schedule because it does not show activities over the life of the program: there are no dates or activities indicating when the other four releases’ sets of work activities will occur, how long they will take, and how they are related to one another. As a result, it will be difficult for program officials to predict, with any degree of confidence, how long it will take to complete all five releases of the Transformation Program. It will also be difficult for program officials to manage and measure progress in executing the work needed to deliver the program, thus increasing the risk of cost, schedule, and performance shortfalls. Lastly, USCIS’s ability to accurately communicate the status of Transformation Program efforts to key stakeholders, such as its employees, Congress, and the public, will be hindered. Because USCIS lacks reliable schedules, its ability to develop reliable life-cycle cost estimates is hampered. As outlined in DHS acquisition management guidance, a life-cycle cost estimate is a required and critical element of the acquisition process. USCIS has developed and updated the life-cycle cost estimate for the Transformation Program, but USCIS’s individual schedules for the Transformation Program do not meet best practices for schedule estimating, thus raising questions about the credibility of the program’s life-cycle cost estimates. For example, neither the OIT nor the solutions architect schedule fully captured all activities to be performed by the government and the contractor. Therefore, when USCIS develops the life-cycle cost estimate, it has an incomplete understanding of the work necessary to accomplish the five releases of the Transformation Program. Further, in the case of both individual schedules, the absence of a schedule risk analysis makes it difficult for officials to account for the cost effects of schedule slippage when developing the life-cycle cost estimate. A reliable life-cycle cost estimate is essential for helping the program determine how much funding is needed and whether it will be available to achieve the Transformation Program’s goals.
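A minimal sketch of the schedule-to-cost linkage described here: time-dependent costs, such as labor and facilities, accrue for every additional month a program runs, so a slip translates directly into a larger estimate. All rates and durations below are hypothetical assumptions, not figures from the program.

```python
# Illustrative link between schedule slippage and life-cycle cost.
# All figures are hypothetical.

monthly_burn = 4.0    # $ millions per month for labor, facilities, support
fixed_costs = 150.0   # $ millions of duration-independent costs
planned_months = 36

def total_cost(months: int) -> float:
    return fixed_costs + monthly_burn * months

baseline = total_cost(planned_months)
for slip in (0, 6, 12):
    cost = total_cost(planned_months + slip)
    print(f"slip {slip:>2} months: ${cost:.0f}M (+${cost - baseline:.0f}M)")
```

This is also why, absent a schedule risk analysis, the cost effect of slippage cannot be bounded: without a distribution over the slip, there is no distribution over the added time-dependent cost.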
Best practices that we have previously identified for cost estimation state that because some program costs, such as labor, supervision, rented equipment, and facilities, cost more if the program takes longer, a reliable schedule can contribute to an understanding of the cost impact if the program does not finish on time. Meeting planned milestones and controlling costs are both dependent on the quality of a program’s schedule. An integrated schedule is key to managing program performance and is necessary for determining what work remains and the expected cost to complete that work. USCIS’s effort to develop a modern, automated system for processing benefit applications and addressing the many current program inefficiencies has been in progress for nearly 6 years. The program is now more than 2 years behind its planned deployment schedule for implementing the agencywide transformed business process, and given the enormity, significance, and complexity of this transformation, it is essential that USCIS take the proper steps for implementation. Although only one benefit type is expected to be available for online account management and adjudication in December 2011, the decision to channel resources and efforts toward ensuring that the core business processes are ready for a December 2011 launch, before making other application types available for online processing, appears to be prudent. Moving forward, it is essential that USCIS consistently follow DHS acquisition management guidance to best position the department to develop and share information, within the department and with Congress and the public, that can be relied upon for informed decision making. Moreover, ensuring that the program’s schedules are consistent with schedule estimating best practices and combined in an integrated master schedule would better position USCIS to reliably estimate the amount of time and effort needed to complete the program. Reliable schedules could also assist USCIS in developing and maintaining a complete and reliable life-cycle cost estimate for the program, which is essential for helping the program determine how much funding is needed and whether it will be available to achieve the Transformation Program’s goals. To help ensure that USCIS takes a comprehensive and cost-effective approach to the development and deployment of transformation efforts to meet the agency’s goals of improved adjudications and customer service processes, we recommend that the Director of USCIS take the following three actions: 1. Ensure program schedules are consistent with the nine schedule estimating best practices. 2. Develop and maintain an integrated master schedule consistent with these same best practices for the Transformation Program. 3. Ensure that the life-cycle cost estimate is informed by milestones and associated tasks from reliable schedules that are developed in accordance with the nine best practices we identified. We provided a draft of this report to DHS for comment. DHS provided written comments, which are reprinted in appendix II. In commenting on this report, DHS, including USCIS, concurred with the recommendations. DHS’s letter outlined the actions that USCIS is taking or has taken to address each recommendation.
Regarding the first recommendation, to ensure program schedules are consistent with best practices, DHS stated that USCIS is incorporating the nine schedule estimating best practices we identified into Transformation Program management reviews, as well as into the Acquisition Review Board review. Regarding the second recommendation, to develop and maintain an integrated master schedule consistent with these same best practices for the Transformation Program, DHS stated that USCIS will develop an integrated master schedule to depict the multiple tasks, implementation activities, and interrelationships needed to successfully develop and deploy the Transformation Program. Regarding the third recommendation, to ensure that the life-cycle cost estimate is informed by reliable schedules developed in accordance with the nine best practices, DHS stated that it will refine the Transformation Program life-cycle cost estimate in accordance with GAO’s 12-Step Process for Cost Estimation. In addition, DHS noted that its revised master schedule will clearly identify work elements to ensure a reasonable and cost-effective time frame for accomplishing the five releases associated with the program. If fully implemented, we believe that the actions DHS identified will address our recommendations. DHS also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In prior work, we have identified nine best practices associated with effective schedule estimating. These are (1) capturing all activities; (2) sequencing all activities; (3) assigning resources to all activities; (4) establishing the duration of all activities; (5) integrating activities horizontally and vertically; (6) establishing the critical path for all activities; (7) identifying float time between activities; (8) conducting a schedule risk analysis; and (9) updating the schedule using logic and durations. We assessed the extent to which two detailed schedules, the Office of Information Technology (OIT) schedule and the solutions architect schedule, dated November 2010, met each of the nine practices. We characterized whether the schedules met each of the nine best practices as follows:

- Not met—the program provided no evidence that satisfies any portion of the criterion.
- Minimally met—the program provided evidence that satisfies less than one-half of the criterion.
- Partially met—the program provided evidence that satisfies about one-half of the criterion.
- Substantially met—the program provided evidence that satisfies more than one-half of the criterion.
- Met—the program provided evidence that satisfies the entire criterion.

Tables 7 and 8 provide the detailed results of our analysis of these schedules. In addition to the contact named above, Mike Dino, Assistant Director; Kathryn Bernet, Assistant Director; and Carla Brown, Analyst-in-Charge, managed this assignment. Sylvia Bascopé, Jim Russell, and Ulyana Panchishin made significant contributions to the work. Nate Tranquilly and Bill Russell provided expertise on acquisition issues.
Tisha Derricotte and Jason Lee provided expertise on scheduling best practices. Frances Cook provided legal support. Linda Miller and Labony Chakraborty provided assistance in report preparation, and Robert Robinson developed the report’s graphics. Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts. GAO-11-672. Washington, D.C.: August 9, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Secure Border Initiative: DHS Needs to Strengthen Management and Oversight of Its Prime Contractor. GAO-11-6. Washington, D.C.: October 18, 2010. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008. Defense Acquisitions: Realistic Business Cases Needed to Execute Navy Shipbuilding Programs. GAO-07-943T. Washington, D.C.: July 24, 2007. USCIS Transformation: Improvements to Performance, Human Capital, and Information Technology Management Needed as Modernization Proceeds. GAO-07-1013R. Washington, D.C.: July 17, 2007. Immigration Benefits: Additional Efforts Needed to Help Ensure Alien Files Are Located When Needed. GAO-07-85. Washington, D.C.: October 27, 2006. Information Technology: Near-Term Effort to Automate Paper-Based Immigration Files Needs Planning Improvements. GAO-06-375. Washington, D.C.: March 31, 2006. Immigration Benefits: Improvements Needed to Address Backlogs and Ensure Quality of Adjudications. GAO-06-20. Washington, D.C.: November 21, 2005. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004.
Each year, the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS) processes millions of applications for immigration benefits using a paper-based process. In 2005, USCIS embarked on a major, multiyear program to transform its process to a system that is to incorporate electronic application filing, adjudication, and case management. In 2007, GAO reported that USCIS was in the early stages of the Transformation Program and that USCIS's plans partially or fully met key practices. In 2008, USCIS contracted with a solutions architect to help develop the new system. As requested, GAO evaluated the extent to which USCIS has followed DHS acquisition policy in developing and managing the Transformation Program. GAO reviewed DHS acquisition management policies and guidance; analyzed Transformation Program planning and implementation documents, such as operational requirements; compared schedule and cost information with GAO best practice guidance; and interviewed USCIS officials.

USCIS has not consistently followed the acquisition management approach that DHS outlined in its management directives in developing and managing the Transformation Program. USCIS awarded a solutions architect contract in November 2008, in effect selecting an acquisition approach before completing documents required by DHS management directives. Specifically, DHS's acquisition policy requires that prior to selecting an acquisition approach, programs establish operational requirements, develop a program baseline against which to measure progress, and complete a plan that outlines the program's acquisition strategy. However, USCIS did not complete an Operational Requirements Document, which was to inform the Acquisition Program Baseline and the Acquisition Plan, until October 2009. Consequently, USCIS awarded a solutions architect contract to begin capability development activities prior to having a full understanding of the program's operational requirements and the resources needed to execute the program. GAO has previously reported that firm requirements must be established and sufficient resources must be allocated at the beginning of an acquisition program, or the program's execution will be subpar. The lack of defined requirements, acquisition strategy, and associated cost parameters contributed to program deployment delays of over 2 years. In addition, through fiscal year 2011, USCIS estimates it will have spent about $703 million, about $292 million more than the original program baseline estimate. USCIS expects to begin deployment of the first release of the Transformation Program in December 2011. However, USCIS is continuing to manage the program without specific acquisition management controls, such as reliable schedules, which detail work to be performed by both the government and its contractor over the expected life of the program. As a result, USCIS does not have reasonable assurance that it can meet its future milestones. USCIS has established schedules for the first release of the Transformation Program, but GAO's analysis shows that these schedules are not reliable as they do not meet best practices for schedule estimating. For example, program schedules did not identify all activities to be performed by the government and solutions architect. Moreover, as outlined by DHS acquisition management guidance, a life-cycle cost estimate is a required and critical element in the acquisition process. 
USCIS has developed and updated the $1.7 billion life-cycle cost estimate for the Transformation Program, but USCIS's individual schedules for the Transformation Program did not meet best practices for schedule estimating, raising questions about the credibility of the program's life-cycle cost estimates. Because some program costs such as labor, supervision, and facilities cost more if the program takes longer, reliable schedules can contribute to an understanding of the cost impact if the program does not finish on time. Collectively, and moving forward, not meeting best practices increases the risk of schedule slippages and related cost overruns, making meaningful measurement and oversight of program status and progress, and accountability for results, difficult to achieve. GAO recommends that USCIS ensure its program schedules and life-cycle cost estimates are developed in accordance with best practices guidance. DHS concurred with GAO's recommendations and outlined the actions that USCIS is taking or has taken to address each recommendation.
Medicare payment policies for technical pathology services have changed over the years as new payment systems for hospital and physician services have been implemented and modified. Beginning with the implementation of the hospital inpatient PPS on October 1, 1983, through the implementation of the Medicare physician fee schedule (MPFS) on January 1, 1992, and the outpatient PPS on August 1, 2000, payment for technical pathology services changed as fixed, predetermined payment replaced reasonable cost or charge-based reimbursement for Medicare services.

Under the inpatient PPS, each inpatient stay is classified into a diagnosis-related group (DRG) based primarily on the patient’s condition. Each DRG has a payment weight assigned to it that reflects the relative cost of inpatient treatment for a patient in that group compared with that for the average Medicare inpatient. Included in the costs of each DRG are nonphysician services provided to inpatients by the hospital and its outside suppliers. A hospital receives a DRG payment from Medicare and a deductible amount from a beneficiary for each inpatient benefit period. Each year, the DRG weights are recalibrated to account for changes in resource use, and the payment rate is adjusted by an update factor to account for changes in market conditions, practice patterns, and technology. Medicare separately pays physicians, including pathologists, and certain other professionals for the direct services they provide to inpatients.

When developing the inpatient PPS in the early 1980s, HCFA determined that technical pathology services outsourced to laboratories were an integral part of the professional services provided by the laboratories’ pathologists, not separate nonphysician services. Based on that determination, the payment for technical pathology services provided by laboratories was included in the larger payment to the laboratories and not included in the PPS payments.

In 1992, HCFA implemented the MPFS, which created distinct payments for the professional and technical components of most diagnostic services, including pathology services. Although the MPFS included a distinct payment to laboratories for technical pathology services, HCFA did not revise its policy to prohibit laboratories from continuing to receive the separate Medicare payment for outsourced technical pathology services provided to inpatients. Under the MPFS, beneficiaries are responsible for a copayment equal to 20 percent of the payment for physician services, including technical pathology services. Thus, inpatient beneficiaries whose technical pathology services were outsourced by a hospital to a laboratory that received direct payment from Medicare were responsible for a copayment, while other inpatients were not.

On July 22, 1999, HCFA proposed ending Medicare payments under the MPFS to laboratories for technical pathology services provided to hospital inpatients on or after January 1, 2000. Under the proposal, laboratories, like suppliers of other nonphysician services, would have to seek payment from hospitals for technical pathology services provided to hospital inpatients. HCFA’s rationale for its proposed rule was that payment for technical pathology services provided to beneficiaries was already included in the inpatient PPS. When implementing the inpatient PPS, HCFA established separate payment rates for rural and urban hospitals based on data from hospitals’ cost reports submitted to the agency. 
Hospitals that performed their own technical pathology services included such costs in their cost reports, while hospitals outsourcing these services did not. According to HCFA, urban hospitals generally performed such services, and in part, their higher rates reflected that. Consequently, in HCFA’s view, when the separate rural rate was eliminated in 1995 and rural hospitals began receiving the higher rate paid to most urban hospitals, the cost of technical pathology services was included in that payment. Thus, HCFA concluded that when a laboratory received payment from Medicare for technical pathology services provided to a hospital inpatient, Medicare was paying twice for the same service—once to the hospital as part of the PPS payment and once to the laboratory through the MPFS. A second reason HCFA cited to support its proposed rule was concern that hospital outsourcing arrangements with laboratories to provide technical pathology services would proliferate if hospitals realized these arrangements would reduce their costs without any reduction in their inpatient PPS payments. After considering comments from the hospital industry and laboratories, which stated, in part, that they would need additional time to renegotiate their agreements, in the final rule, HCFA delayed implementation of the policy until January 1, 2001. In December 2000, the Congress enacted provisions in BIPA that stated that laboratories furnishing technical pathology services to hospital patients under agreements with hospitals as of the publication date of the HCFA proposed rule could continue to receive payment directly from Medicare for these services until January 1, 2003. Because the outpatient PPS was implemented in August 2000, the provisions applied to services provided to outpatients as well as inpatients. The outpatient PPS pays hospitals a predetermined amount per service similar to a fee schedule. All services paid under the outpatient PPS, including technical pathology services, are classified into groups called ambulatory payment classifications (APC). Like inpatient DRGs, the relative weights of the APCs are adjusted annually by recalibration and the payment rates by an update factor to account for changes in resource use, technology, practice cost, and service delivery. When the outpatient PPS was implemented, beneficiary copayments for a service were generally 20 percent of the hospitals’ median charges for that service in 1996, updated to 1999. Therefore, the beneficiary cost-sharing obligation as a percentage of APC payment rates varies by service. Because the median charges were often higher than the APC payment rates implemented with the outpatient PPS, beneficiary copayments were frequently as high or higher than 50 percent of the total APC payment amount. The Balanced Budget Act of 1997 established a mechanism to gradually decrease the cost-sharing percentages for all APCs to 20 percent over time. The copayments that beneficiaries are responsible for paying under the outpatient PPS for technical pathology services that are furnished directly by hospitals are roughly comparable to the copayments that beneficiaries are responsible for paying laboratories under the MPFS when services are outsourced. The outpatient PPS payment rates for technical pathology services are significantly lower than the corresponding MPFS payment rates, but outpatient PPS copayments represent a higher percentage of the payment for technical pathology services than MPFS copayments. 
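To make the copayment arithmetic concrete, the sketch below uses invented dollar amounts (the report does not give per-service rates) to show how freezing the copayment at 20 percent of an updated 1996 median charge can leave the beneficiary responsible for well over half of the total APC payment when median charges exceed the APC rate.

```python
# A minimal, hypothetical sketch of the initial outpatient PPS copayment
# arithmetic described above. The dollar figures are invented for
# illustration; they are not actual median charges or APC rates.

def initial_opps_copay(updated_median_charge: float, apc_rate: float):
    """Copayment was generally 20 percent of the hospital's 1996 median
    charge (updated to 1999), not 20 percent of the APC payment rate."""
    copay = 0.20 * updated_median_charge
    share_of_apc = copay / apc_rate
    return copay, share_of_apc

# A service whose updated median charge ($60) exceeds its APC rate ($20):
copay, share = initial_opps_copay(updated_median_charge=60.0, apc_rate=20.0)
print(f"copayment ${copay:.2f} = {share:.0%} of the APC payment")
# copayment $12.00 = 60% of the APC payment
```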
If the BIPA provisions are not reinstated and CMS terminates direct payments to laboratories, hospitals would have to negotiate payment amounts with laboratories to pay them directly for services delivered to inpatient and outpatient beneficiaries or begin to supply these services themselves. While the hospitals would not experience any direct adjustments to their inpatient DRG payments, over time, hospital costs of paying laboratories for technical pathology services would be reflected in the DRG weights, as the annual recalibration accounts for changes in the costs of delivering services. For services delivered to outpatients, hospitals would bill Medicare under the outpatient PPS for technical pathology services and, therefore, would recover additional revenue even if they continued to outsource these services to laboratories. Inpatient beneficiaries of hospitals that outsource technical pathology services would no longer be responsible for additional copayments to the laboratories. Outpatient beneficiaries would no longer be responsible for copayments to laboratories under the MPFS, but instead would be responsible for copayments to the hospitals where they received their services under the outpatient PPS.

CAHs, which as of March 2003 constituted 15 percent of all hospitals and 31 percent of rural hospitals, would not be affected by the termination of direct laboratory payments. CAHs are not paid under the inpatient and outpatient PPS, but instead are paid based on their reasonable costs of providing services. Currently, CAHs receive no payment from Medicare for technical pathology services outsourced to laboratories that directly bill Medicare because CAHs incur no costs in the delivery of those services. If direct laboratory payments are terminated, CAHs would be reimbursed by Medicare for their costs of paying laboratories to perform technical pathology services, and outpatient beneficiaries who currently are responsible for paying 20 percent of the payment for their technical pathology services to the laboratories under the MPFS would instead be responsible for paying 20 percent of the CAHs’ customary charges. See table 1 for a description of Medicare payments to outsourcing PPS hospitals and CAHs, and table 2 for a description of beneficiary cost-sharing obligations at outsourcing PPS hospitals and CAHs, under current policy and if direct payment to laboratories is terminated.

We estimate that in 2001, 4,773 PPS hospitals and CAHs, representing 95 percent of all such facilities, outsourced at least some technical pathology services to laboratories that received direct payment from Medicare for those services (see table 3). However, most hospitals outsourced a small number of these services to laboratories. In 2001, approximately 1.4 million technical pathology services were outsourced, and the median number of outsourced services per hospital was 81. Approximately 68 percent of all hospitals outsourced 200 or fewer technical pathology services, and only 6 percent outsourced more than 1,000 services. Outsourcing hospitals consisted of 2,428 urban PPS facilities and 1,651 rural PPS facilities, representing 95 percent and 97 percent of urban and rural PPS hospitals in 2001, respectively, and 694 CAHs. Among hospitals outsourcing technical pathology services, urban hospitals, including CAHs, outsourced a median of 97 services and 64 percent of all services, and rural hospitals, including CAHs, outsourced a median of 61 services and 36 percent of all services. 
Almost twice as many services were delivered to outpatient beneficiaries compared to inpatient beneficiaries, as outpatient services accounted for approximately 64 percent of all outsourced services. If laboratories had not received direct payment for services for hospital patients, we estimate that Medicare spending would have been $42 million less in 2001, with $18 million and $24 million in savings for inpatient and outpatient services, respectively, and overall beneficiary cost sharing would have been reduced by $2 million. In 2001, payments to laboratories providing technical pathology services to beneficiaries who were hospital patients equaled over $63 million, including Medicare payments of about $51 million ($18 million for inpatient services and $33 million for outpatient services) and beneficiary copayments of almost $13 million ($5 million for inpatient services and $8 million for outpatient services). Paying laboratories to provide technical pathology services is unlikely to impose a large financial burden on most hospitals. However, the extent to which an individual hospital’s costs and a laboratory’s revenues would change if direct payment to laboratories is terminated would depend on the rates negotiated by that hospital and laboratory. If payment to the laboratory is made at the MPFS rate, a PPS hospital outsourcing the median number of technical pathology services would incur an additional cost of approximately $2,900. Additionally, there would be no financial impact on CAHs if direct laboratory payment is terminated because they would be reimbursed for their reasonable costs of outsourcing technical pathology services. In 2001, estimated payments to laboratories providing technical pathology services to hospital patients totaled over $63 million, including Medicare payments of about $51 million and beneficiary copayments of almost $13 million (see table 4). For services provided to inpatients, total laboratory payments equaled approximately $23 million, with $18 million from Medicare and $5 million from beneficiaries. For services provided to outpatients, total laboratory payments equaled approximately $41 million, including $33 million from Medicare and $8 million from beneficiaries. If laboratories had not received direct payment for services for hospital patients, we estimate that Medicare spending would have been $42 million less in 2001 (see table 5). The $18 million in inpatient savings would have resulted from Medicare discontinuing payments for technical pathology services to laboratories under the MPFS, while making no additional payments to PPS hospitals for inpatient services. For outpatient services, Medicare would not have paid laboratories directly, but would have paid PPS hospitals under the outpatient PPS. If direct payment to laboratories had been terminated, Medicare would have paid PPS hospitals an estimated $9 million under the outpatient PPS in 2001 for technical pathology services, thus saving $24 million. If laboratories had not received direct payment for services for hospital patients, Medicare beneficiaries would have been relieved of approximately $2 million in cost-sharing obligations (see table 6). In 2001, inpatients at hospitals that outsourced services were responsible for paying laboratories approximately $5 million in copayments under the MPFS. If direct payment to laboratories is terminated, inpatients would make no copayments to laboratories for technical pathology services. 
We estimate that the cost-sharing obligation of outpatients at PPS hospitals would have increased by $3 million to approximately $11 million under the outpatient PPS if laboratories had not received direct payment, compared to an estimated cost sharing of $8 million under the MPFS. However, outpatients’ cost-sharing obligations for technical pathology services under the outpatient PPS gradually will decline, as mandated by the law. As the percentage declines, beneficiary copayments for technical pathology services under the outpatient PPS should become lower than under the MPFS, as long as payments for these services generally remain lower under the outpatient PPS than the MPFS. If outsourcing hospitals agree to pay laboratories the rates the laboratories currently receive under the MPFS for technical pathology services, these amounts are unlikely to impose a large financial burden on most hospitals. In 2001, a PPS hospital outsourcing the median number of services outsourced by PPS hospitals, 94, would have incurred additional costs of approximately $2,900 in paying a laboratory for technical pathology services, representing a small fraction of hospitals’ annual Medicare revenues. A PPS hospital outsourcing 1,283 services annually—the 95th percentile of outsourced technical pathology service volume in our analysis—would have incurred an additional annual cost of just under $40,000. There would be no financial impact from terminating direct laboratory payments for rural hospitals that are or become CAHs, as CAHs would recover from Medicare their reasonable costs of outsourcing technical pathology services. The extent to which a hospital’s costs and a laboratory’s revenues would change if direct laboratory payments are terminated would depend on the rates negotiated between the two parties. Hospitals’ costs would increase because they would begin paying the laboratories for technical pathology services; laboratories’ revenues would decline if hospitals pay lower rates for the technical pathology services than Medicare currently pays laboratories under the MPFS. Because larger hospitals and those located in urban areas have more purchasing power and may have multiple laboratories from which to choose, these hospitals are likely to fare better than smaller hospitals and those in rural areas. Laboratory officials we spoke with voiced concern that some hospitals would insist that laboratories furnish technical pathology services at no charge or at extremely low rates in exchange for hospitals referring other business to the laboratories and their pathologists. However, these officials also indicated that their laboratories would not perform technical pathology services at no charge or for very low rates. Furthermore, hospitals might be deterred from requesting low rates because of concerns that such arrangements might violate applicable fraud and abuse laws. Although hospitals and laboratories would face new billing costs—both one-time and ongoing—if direct payments to laboratories are terminated, such changes generally would impose a modest additional cost. We spoke with officials from hospitals and laboratories that already have billing arrangements for these services, and they did not report to us that these costs were burdensome. 
Beneficiaries’ Access Likely Would Be Unaffected

Medicare beneficiaries’ access to pathology services is unlikely to be disrupted if direct payments to laboratories are terminated because hospitals are unlikely to limit surgical services, including those requiring pathology services. In addition, hospitals would likely continue to outsource technical pathology services to laboratories because this would generally be less costly than performing these services themselves.

Limiting Surgeries Unlikely

Representatives of outsourcing hospitals with whom we spoke indicated that their hospitals would not eliminate or restrict surgical procedures if direct payment to laboratories is terminated. Because a large percentage of hospital-based surgeries require pathology services, hospitals would lose an important source of revenue if they restricted surgeries to those not requiring such services. Outsourcing hospitals stated that they could not afford this revenue loss. Rural hospitals, which are often the sole hospitals in their geographic areas, expressed the added concern that eliminating surgical procedures would reduce their communities’ access to medical services.

If direct payment to laboratories is terminated, representatives from hospitals that do not maintain pathology laboratories and outsource technical pathology services to laboratories said they would continue to outsource technical pathology services. Few such hospitals have a sufficiently large volume of technical pathology services to make it cost effective to perform such services themselves. For most hospitals, the equipment and personnel expenses associated with maintaining their own pathology laboratories would likely exceed the cost of outsourcing the technical pathology services to laboratories. Hospital officials also stated that they have had difficulty recruiting histotechnicians, and it therefore would be difficult to staff new, or expand existing, pathology laboratories.

Termination of direct laboratory payments generally would reduce Medicare expenditures and beneficiary cost-sharing obligations for technical pathology services while having little effect on beneficiaries’ access to these services. While termination of direct laboratory payments would impose a small financial burden on outsourcing PPS hospitals, this change would have no impact on CAHs. As the relative payment weights of services provided under the inpatient and outpatient PPS are adjusted annually, any increased costs hospitals incur to pay laboratories for technical pathology services will, over time, be reflected in the inpatient and outpatient PPS payments. Termination of direct laboratory payments also would eliminate the inequity between beneficiary cost-sharing obligations at different hospitals. In addition, continuing direct laboratory payments is an inappropriate means for providing financial assistance to hospitals. Hospitals, in receiving fixed payment amounts under a PPS and paying suppliers of nonphysician services provided to a Medicare patient from such fixed amounts, have an incentive to provide health care services efficiently. Permitting hospitals to outsource technical pathology services and have laboratories seek payment from Medicare eliminates the incentive for the efficient provision of these services and leads to potential Medicare double payments. 
We suggest that the Congress may wish to consider not reinstating the provisions that allow laboratories to receive direct payment from Medicare for providing technical pathology services to hospital patients. We recommend that the Administrator of CMS terminate the policy of permitting laboratories to receive payment from Medicare for technical pathology services provided to hospital patients. In commenting on a draft of this report, CMS stated that it is important that payment policy encourage efficiencies in the provision of technical pathology services. CMS stated that it would carefully consider our recommendation and noted that the Congress is currently considering this issue. CMS further stated that it would want to ensure that implementation of the recommendation does not adversely affect rural hospitals. As we noted in the draft report, permitting laboratories to receive payment directly from Medicare for technical pathology services is not an appropriate or efficient mechanism for providing financial assistance to hospitals, as it is contradictory to the objectives of a PPS. In addition, because the median number of technical pathology services annually outsourced by rural hospitals was low, we do not believe that paying laboratories directly for these services will place a significant financial burden on these hospitals. CMS’s written comments are reprinted in appendix II. The agency also provided technical comments, which we incorporated where appropriate. We received oral comments on a draft of this report from the American Hospital Association (AHA), the College of American Pathologists (CAP), and the National Rural Health Association (NRHA). These organizations disagreed with our conclusions, matter for congressional consideration, and recommendation and suggested that direct laboratory payments should continue. Generally, all three organizations expressed concerns about rural hospitals. AHA and NRHA expressed the concern that termination of direct laboratory payments would place a financial burden on rural hospitals, and CAP expressed concern that hospitals, including CAHs, and laboratories would experience an increased administrative burden in changing their current billing practices. CAP also raised a question about whether hospitals and laboratories would be able to successfully negotiate new payment arrangements for outsourced technical pathology services; if not, in its view, beneficiaries’ access to services could be jeopardized. As we noted in the draft report, hospital officials we spoke with, including those from rural hospitals, stated they would continue to offer technical pathology services as a part of their surgical services if they had to pay laboratories directly for technical pathology services. These officials stated that they would not consider eliminating surgeries if they had to enter new, or modify existing, arrangements with laboratories to provide technical pathology services. We acknowledge that modifying their billing practices will impose costs on hospitals and laboratories; however, officials from hospitals and laboratories that already have billing arrangements for technical pathology services did not report to us that these costs were burdensome. We are sending a copy of this report to the Administrator of CMS and appropriate congressional committees. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others on request. 
If you or your staffs have any questions, please call me at (202) 512-7119 or Nancy A. Edwards at (202) 512-3340. Other major contributors to this report include Beth Cameron Feldpush, Jessica Lind, and Paul M. Thomas. In conducting this study, we analyzed Medicare claims and provider data obtained from the Centers for Medicare & Medicaid Services (CMS). We interviewed officials at CMS, the Congressional Budget Office, and the Department of Health and Human Services Office of Inspector General. We also interviewed industry representatives from the American Hospital Association, College of American Pathologists, and National Rural Health Association, as well as representatives of individual hospitals and laboratories and a pathology practice management consulting company. Finally, we conducted a site visit of a laboratory and one of the rural hospitals to which it provides pathology services. As there is no list of covered hospitals and the laboratories to which they outsource technical pathology services, we used 2001 Medicare claims data, the most recent year for which data are available, for our analysis. We received the data files directly from CMS. These data reflect the set of claims submitted to and paid by CMS for services performed in 2001. We performed our own initial analyses to check the reliability of the data. We estimated the number of hospitals outsourcing technical pathology services to laboratories that directly billed Medicare and the volume of and payments for these services. To do so, we matched Medicare laboratory claims with claims submitted by prospective payment system (PPS) hospitals and critical access hospitals (CAH). We assumed that a laboratory’s service was related to a hospital inpatient admission or outpatient encounter if the date of service on the laboratory’s claim was (1) during an inpatient’s stay at a hospital, within 3 days prior to the inpatient’s admission, or after the inpatient’s discharge or (2) on the day of or within 3 days after an outpatient surgical procedure at a hospital. We included in our list of total hospitals only those hospitals listed in the CMS Provider of Services (POS) file and characterized outsourcing hospitals as urban or rural according to their designation in the POS file. To identify hospitals outsourcing technical pathology services that have converted to CAHs, we matched each hospital’s Medicare provider number to the list of CAHs maintained by the North Carolina Rural Health Research and Policy Analysis Center at the University of North Carolina as of March 2003. To estimate Medicare payments and beneficiary copayments to laboratories for technical pathology services in 2001, we first calculated the claims frequency for each type of technical pathology service in our file of matched laboratory and hospital claims. We estimated the Medicare payment amount for each type of technical pathology service as 80 percent of the Medicare physician fee schedule (MPFS) national standard payment rate for that service and beneficiary cost sharing as the remaining 20 percent, and then we multiplied the claims frequency by the estimated Medicare and beneficiary cost-sharing amounts to calculate total laboratory payments. We performed similar calculations to find payments for inpatient and outpatient claims exclusively. 
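The matching and payment rules described above lend themselves to a short illustration. The following sketch assumes hypothetical record layouts; only the 3-day matching windows and the 80/20 split between Medicare and the beneficiary come from the methodology described in this appendix.

```python
from datetime import date, timedelta

# A sketch, under hypothetical record layouts, of the claims-matching windows
# and the 80/20 payment split described in this appendix. The 3-day windows
# and percentages come from the text; everything else is illustrative.

def matches_inpatient(service: date, admission: date, discharge: date) -> bool:
    """Laboratory service tied to an inpatient stay: during the stay, within
    3 days prior to admission, or after discharge."""
    during_stay = admission <= service <= discharge
    pre_admission = admission - timedelta(days=3) <= service < admission
    post_discharge = service > discharge
    return during_stay or pre_admission or post_discharge

def matches_outpatient(service: date, surgery: date) -> bool:
    """On the day of, or within 3 days after, an outpatient surgical procedure."""
    return surgery <= service <= surgery + timedelta(days=3)

def estimate_payments(claim_counts: dict, mpfs_rates: dict):
    """Medicare pays 80 percent of the MPFS rate; the beneficiary copayment
    is the remaining 20 percent."""
    medicare = sum(n * 0.80 * mpfs_rates[svc] for svc, n in claim_counts.items())
    copays = sum(n * 0.20 * mpfs_rates[svc] for svc, n in claim_counts.items())
    return medicare, copays
```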
To estimate 2001 Medicare outpatient PPS payments and beneficiary cost sharing to PPS hospitals if laboratories had not received direct payments, we multiplied the 2001 outpatient PPS Medicare payment rate and beneficiary copayment amount for each type of technical pathology service by the frequency of each type of technical pathology service in the outpatient claims.

To estimate the cost difference to PPS hospitals of paying laboratories to perform technical pathology services, we first calculated a weighted average payment rate for technical pathology services for 2001 by multiplying the 2001 national standard MPFS payment rate by the frequency percentage of each type of technical pathology service among PPS hospitals and summing the payments for all services. We multiplied the median and 95th percentile volume of services outsourced by PPS hospitals by the estimated weighted average laboratory payment. We then calculated a weighted outpatient PPS payment rate, including beneficiary copayments, for technical pathology services in 2001, as described above for calculating the weighted average MPFS payment rate. Because approximately 63 percent of technical pathology services provided to patients of PPS hospitals were provided to outpatients, we estimated the number of outpatient services by multiplying the median and 95th percentile volumes by 63 percent. We then multiplied the estimated number of outpatient services by the estimated weighted average outpatient PPS payment rate and subtracted this amount from the weighted average laboratory payment.

We interviewed representatives of four Medicare carriers and four state hospital associations. In addition, we spoke with representatives from 19 hospitals and 13 laboratories from a sample of eight geographically diverse states—Colorado, Florida, Iowa, North Dakota, Pennsylvania, Tennessee, South Dakota, and Washington—and two additional laboratories in Oklahoma. We selected several states in the South, Southeast, and Midwest where, according to CMS officials, outsourcing arrangements for technical pathology services were believed to be fairly common. We interviewed officials from urban and rural hospitals and hospitals and laboratories with different types of outsourcing arrangements, including a hospital that outsources only complex and infrequently performed services and a hospital that currently pays its laboratory for technical pathology services.
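The weighted-average calculations described earlier in this appendix can be illustrated with a short sketch. The two service types and their rates below are hypothetical (chosen so the median-volume result lands near the report's $2,900 estimate); only the 63 percent outpatient share and the 94 and 1,283 service volumes are taken from the analysis.

```python
# A worked sketch of the weighted-average rate calculation described above.
# Service types and rates are hypothetical; the 63 percent outpatient share
# and the 94 (median) and 1,283 (95th percentile) volumes come from the text.

def weighted_rate(rates: dict, volume_share: dict) -> float:
    """Payment rate weighted by each service type's share of total volume."""
    return sum(rates[code] * volume_share[code] for code in rates)

share = {"service_a": 0.7, "service_b": 0.3}
mpfs = weighted_rate({"service_a": 35.0, "service_b": 20.0}, share)  # ~$30.50
opps = weighted_rate({"service_a": 12.0, "service_b": 8.0}, share)   # ~$10.80

for volume in (94, 1_283):                  # median and 95th percentile volumes
    cost_to_pay_labs = volume * mpfs        # 94 services -> roughly $2,900
    opps_revenue = (volume * 0.63) * opps   # ~63 percent were outpatient services
    print(volume, round(cost_to_pay_labs), round(cost_to_pay_labs - opps_revenue))
```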
In 1999, the Health Care Financing Administration, now called the Centers for Medicare & Medicaid Services (CMS), proposed terminating an exception to a payment rule that had permitted laboratories to receive direct payment from Medicare when providing technical pathology services that had been outsourced by certain hospitals. The Congress enacted provisions in the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) to delay the termination. The BIPA provisions directed GAO to report on the number of outsourcing hospitals and their service volumes and the effect of the termination of direct laboratory payments on hospitals and laboratories, as well as on access to technical pathology services by Medicare beneficiaries. GAO analyzed Medicare inpatient and outpatient hospital and laboratory claims data from 2001 to develop its estimates.

In 2001, approximately 95 percent of all Medicare prospective payment system (PPS) hospitals--hospitals that are paid predetermined fixed amounts for services--and critical access hospitals (CAH), which receive reimbursement from Medicare based on their reasonable costs, outsourced some technical pathology services to laboratories that received direct payment for those services. However, the median number of outsourced services per hospital was small--81. If laboratories had not received direct payments for services for hospital patients, GAO estimates that Medicare spending would have been $42 million less in 2001, and beneficiary cost-sharing obligations for inpatient and outpatient services would have been reduced by $2 million. Most hospitals are unlikely to experience a financial burden from paying laboratories to provide technical pathology services. If payment to the laboratory is made at the current rate, a PPS hospital outsourcing the median number of technical pathology services outsourced by PPS hospitals, 94, would incur an additional annual cost of approximately $2,900. There would be no financial impact for the 31 percent of rural hospitals that are CAHs, as they would receive Medicare reimbursement for their additional costs. Medicare beneficiaries' access to pathology services would likely be unaffected if direct laboratory payments are terminated. Hospital officials stated they were unlikely to limit surgical services, including those requiring pathology services, because limiting these services would result in a loss of revenue and could restrict access to services for their communities.
Approximately 50,000 criminals are convicted in federal courts each year. Federal courts may impose one or more of the following four monetary penalties upon conviction.

• Fines: amounts the court sets as punishment. Fines may accumulate interest, and those that are not paid promptly may result in additional penalties.
• Restitution: amounts paid to identifiable crime victims that are entitled to compensation.
• Special assessments: fixed amounts, ranging from $5 to $200, assessed for each count upon which the defendant is convicted. The courts are required to impose special assessments on each offender convicted of a crime.
• Reimbursement of costs: an amount equal to the court and legal costs of the trial.

Following enactment of numerous laws relating to the imposition and collection of monetary penalties during the 1980s, federal courts have imposed criminal fines and ordered monetary restitution in greater amounts. According to DOJ, an estimated $1.5 billion in monetary penalties were assessed against persons convicted of federal crimes in fiscal year 1994. DOJ also estimated that, as of the end of fiscal year 1994, about $4.5 billion in criminal fines, restitution, special assessments, and court costs were outstanding for persons convicted of federal crimes.

Criminal debtors have been allowed to make payments directly to victims or to local offices of one of three different agencies within judicial districts—the Clerk of the Court, probation office, or U.S. Attorney’s Office (USAO). For example, criminal debtors may make payments based on the monetary penalties routinely imposed by federal courts as follows.

• Debtors may pay the courts directly.
• Debtors may pay the local USAO, which forwards receipts to the Clerk of the Court or deposits the receipts in DOJ lock boxes.
• Incarcerated debtors may pay their debts through the Inmate Financial Responsibility Program.
• Debtors may periodically pay their debts through probation officers, who are members of the federal court staff.
• Debtors may pay restitution directly to victims or through the courts or their USAO.

This has created a fragmented process for tracking and collecting criminal debt and resulted in a lack of standardized procedures, discrepancies among agency collection records, and duplication of effort.

The Criminal Fine Improvements Act of 1987 transferred responsibility for the processing of criminal fines and assessments from DOJ to AOUSC. The House Judiciary Committee’s report on the act set forth the expectation that AOUSC was to establish a centralized and highly automated system capable of receiving all criminal debtor payments; processing fines, restitution, forfeiture of bail bonds and collateral, and special assessments; and retrieving up-to-date information on the status of any obligation covered by the system for all 94 judicial districts. By using this centralized debt system and relieving DOJ of recordkeeping responsibilities, it was expected that additional DOJ resources would be available to perform debt collection enforcement and, therefore, collect more of the outstanding debt.

The Executive Office for United States Attorneys (EOUSA), within DOJ, is responsible for providing direction and administrative support over debt collection activities. U.S. Attorneys’ staff within each judicial district are responsible for enforcing collection of delinquent or defaulted criminal debts. 
However, because implementation of the NFC system is still in the early stages, DOJ has continued to perform much of the recordkeeping as well as debt collection enforcement functions. After the system becomes fully operational in all districts, DOJ is expected to be relieved of the recordkeeping function but will retain responsibility for enforcing legal collection actions on delinquent and defaulted criminal debt. DOJ will be one of the primary users of the NFC system because the system will identify delinquent and defaulted accounts that require legal enforcement action.

After receiving initial funding of $2.2 million in 1990, AOUSC initiated an NFC project in Raleigh, North Carolina, and began developing a prototype automated information system to centralize criminal debt for one judicial district. However, in an August 1993 report on the Raleigh Fine Center, we concluded that the project would not be operational by its 1995 target date due to difficulties in reconciling debtor accounts and training staff. We also raised concerns about the lack of computer systems security controls to prevent unauthorized access to sensitive information contained in the NFC database. As a result of our report and a program review performed subsequently by AOUSC, in October 1993, AOUSC discontinued development efforts in Raleigh. According to AOUSC officials, the project would have been too difficult and costly to expand to the remaining 93 judicial districts primarily because of the inadequately developed and documented computer software. The system continues to operate in the Raleigh location only.

In April 1994, AOUSC began its current two-phased implementation approach that emphasized using an off-the-shelf accounting system that could be enhanced rather than developing a system totally in-house. Under phase I, AOUSC plans to install an off-the-shelf accounting system to support establishing debtor accounts, billing debtors, recording receipts, paying victims, and reporting on a limited scale for new criminal debts. Also, AOUSC officials plan to enhance the off-the-shelf system to automate certain processes, most of which are now performed manually. For example, they plan to automate the calculation of interest and penalties that are to be assessed on criminal fines. Phase I was also to include reconciling existing criminal debt accounts and entering agreed-upon balances into the NFC system. By phase I’s scheduled completion in September 1996, AOUSC expects that all 94 judicial districts will be providing new criminal debt information to NFC. To minimize the impact on NFC and judicial district staffs caused by the need to manually perform some processing and billing functions, AOUSC decided to add low-volume districts—those generally having fewer than 200 convicted criminals annually—first. Larger districts would be added later as planned system enhancements became operational. See appendix III for a table showing proposed time frames for adding judicial districts to NFC.

Once the selected system is fully operational under phase I, AOUSC plans to expand the system during phase II to improve users’ access to NFC information and increase management information reporting capabilities. During phase II, which is projected to be completed within 3 to 5 years, AOUSC intends to either further enhance its selected off-the-shelf accounting system or perform a completely new procurement. 
AOUSC officials have been unable as yet to provide information, including cost estimates, on specifically how they will accomplish phase II. Our review focused primarily on AOUSC’s efforts to centralize criminal debt in its NFC system under its phase I implementation. We were unable to analyze AOUSC’s phase II implementation due to the lack of specificity at this stage as to what AOUSC planned to accomplish.

We reviewed available project and systems planning documentation for phase I, including systems and functional requirements, and discussed project and systems status with NFC, EOUSA, and judicial district officials. We also reviewed (1) training documents used to orient and instruct judicial district staff on the process for submitting data to NFC, (2) the NFC implementation schedule for adding judicial districts to the NFC system, and (3) AOUSC’s draft reconciliation strategy. In addition, we observed data entry and processing of judicial district input data on the interim NFC system and reviewed monthly reports generated by the system. We also discussed with AOUSC officials their strategy for selecting an off-the-shelf accounting system for later use. Further, we reviewed documentation and discussed with Raleigh Fine Center personnel lessons learned from the prior systems development effort and observed operation of the system with Raleigh personnel. We reviewed current and planned NFC staffing needs and obtained NFC budgetary and expenditure data. We performed our review from August 1994 through January 1995 in accordance with generally accepted government auditing standards. A detailed description of our scope and methodology is in appendix II. The Director, AOUSC, and the Director, EOUSA, provided written comments on a draft of this report. These comments are presented in appendixes IV and V, respectively.

Since April 1994, AOUSC has progressed in implementing phase I of its NFC approach. AOUSC officials have (1) established a process for centralizing and maintaining federal criminal debt accounts, (2) developed a formal training program for judicial district staff, (3) selected an off-the-shelf accounting system that became operational in April 1995, (4) installed an inexpensive interim software package that was used through March 1995 until the selected system became operational, (5) begun processing new criminal debt information for 25 of the smaller judicial districts, and (6) begun converting criminal debt information from the interim to the off-the-shelf system.

AOUSC officials selected an off-the-shelf accounting system that could be modified to meet unique billing and disbursement needs. AOUSC officials told us that they selected this accounting system based on published external evaluations of commercially available microcomputer accounting software as well as AOUSC’s internal evaluations of several software packages. This decision was consistent with Office of Management and Budget (OMB) Circular A-130 guidance that encourages agencies to purchase off-the-shelf financial systems rather than develop their own. However, because the latest version of the selected system was still under development by the vendor, AOUSC bought and started using an inexpensive (about $200) interim accounting software package having limited capability and processing capacity so that implementation for some of the smaller districts could begin in August 1994. 
AOUSC officials told us that the interim software package would accommodate processing requirements for the 23 judicial districts that they expected to have implemented through March 1995. According to the AOUSC Director, the selected off-the-shelf accounting system, which was received in October 1994 and began operating in April 1995, is replacing the interim system and is to provide the capability for processing criminal debt information for all 94 judicial districts by September 1996. Efforts to convert criminal debt accounts from the interim system to the off-the-shelf system are to be completed by May 1995.

AOUSC has also established a process for obtaining and updating debt information. Once NFC is implemented in a judicial district, Clerk of the Court staff are to manually complete standardized account establishment forms for newly convicted criminals; these forms include the monetary penalties imposed by the courts, along with (1) personal debtor data, such as address, date of birth, and social security number, (2) payment requirements, and (3) information on any restitution payments to be made to those harmed by the crime. In addition, judicial district staffs in the Clerk of the Court’s office, probation office, and U.S. Attorney’s Office also are to submit manually prepared account maintenance forms to NFC when information about the debtor or the account changes, such as a change of address. Under this process, once NFC staff receive completed forms, they are to enter data into the automated accounting system. The NFC accounting system generates monthly billings to debtors, who, in turn, are to remit payments to NFC. NFC staff manually determine, based on payments received from debtors and the terms set forth in the sentencing documentation, any applicable amounts that are to be paid to crime victims. The NFC system also generates monthly statements to the judicial districts showing individual debtor account balances.

AOUSC has developed and begun conducting a formal training program at judicial districts aimed at defining the roles and responsibilities of staff in the Clerk of the Court’s office, the probation office, and U.S. Attorney’s Office who compile and submit criminal debt information to NFC. The primary goals of the training are to (1) promote effective communication among the entities involved in criminal debt activities and (2) ensure that complete and accurate standardized data are provided to NFC. This training is provided to judicial district personnel prior to the district sending criminal debt information to the NFC system.

According to AOUSC’s Director, as of April 1995, NFC implementation in the judicial districts was slightly ahead of schedule. On August 1, 1994, the NFC began receiving criminal debt information on new accounts from two judicial districts—Vermont and Maine. As of December 1994, 13 judicial districts were sending information on new criminal debt accounts to NFC. The NFC was expanded to 25 judicial districts as of April 1995, which was 2 more than the 23 AOUSC had initially planned in April 1994. Our analysis of the criminal debt data received by NFC for the initial 13 districts showed that these new accounts amounted to about $6.4 million, which represented about 6 percent of the 13 districts’ criminal cases with outstanding balances, and about one-tenth of one percent of DOJ’s estimated $4.5 billion in total debt. We did not perform similar analysis for the 12 additional districts recently brought onto NFC. 
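The account establishment forms described above capture a consistent set of data elements. The sketch below renders them as a hypothetical record layout; the field names and types are invented for illustration and are not NFC's actual data dictionary.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical rendering of the information captured on NFC account
# establishment forms as described above; names and types are invented.

@dataclass
class RestitutionOrder:
    victim: str
    amount_owed: float

@dataclass
class DebtorAccount:
    name: str
    address: str                 # needed for NFC's monthly billings
    date_of_birth: date
    social_security_number: str
    monetary_penalties: float    # fines, assessments, and costs imposed by the court
    payment_requirements: str    # terms set forth in the sentencing documentation
    restitution: list = field(default_factory=list)  # RestitutionOrder entries
```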
Extensive planning and implementation effort remains if AOUSC is to fully implement a system to account for and report on criminal debt for all 94 judicial districts. Currently, only a small fraction of criminal debt accounts are on the NFC system. Before the larger courts are added, software enhancements will be needed so that NFC can effectively support certain billing, payment receipt, and disbursement functions, most of which NFC staff now do manually. In addition, some of the larger challenges facing AOUSC and DOJ involve (1) ensuring that data for existing debtor accounts are reliable before entering the information into the NFC system and (2) developing a methodology for determining, tracking, and reporting the collectibility of new and existing debt. Further, while AOUSC has indicated that it will enhance the system to increase user access to NFC information and improve management reporting during phase II, it has not yet determined what specific enhancements will be made to NFC or how it intends to accomplish them. During phase I, in addition to replacing the interim software package with the selected off-the-shelf system, AOUSC plans to develop several software enhancements to automate certain functions, most of which are now performed manually by NFC staff. For example, one enhancement will automate the calculation of interest and penalties. While the selected off-the-shelf system—an accounts receivable/accounts payable package—has standard capabilities, legislation that calls for calculation of interest and default penalties for fines requires additional capabilities. Specifically, there are several statutes that specify interest rates and penalties that are to be calculated for fines. The applicable statute is determined by the date of the debtor’s offense. Automating such interest and penalty calculations would save staff time and eliminate errors inherent in manual calculations. Another enhancement involves developing an automated interface to enable judicial districts to provide account establishment and maintenance data to NFC in automated formats. This improvement would reduce manual data entry tasks now performed by NFC staff and the corresponding risk of errors. Other enhancements include (1) automating transaction data from debtor payments sent to lock boxes and payments made through the Bureau of Prisons Inmate Financial Responsibility Program and (2) establishing an interface between NFC and a DOJ system to allow DOJ staff increased access to account information. While AOUSC has begun entering new account information, a major challenge will be reconciling existing criminal debt accounts and entering the resulting amounts into the NFC system. NFC will not contain complete criminal debt information for its users and the Congress until it also includes complete and reliable data on the estimated $4.5 billion in existing criminal debt. Separate records maintained by judicial district staffs within the Clerk of the Court’s office, probation office, and U.S. Attorney’s Office will need to be reconciled. Within a judicial district, each of these entities is responsible for receiving payments and maintaining accounting records, and they may not balance or update their records at the same time. Another reason that accounts may not agree is that the U.S. Attorney’s Office is generally the only entity within the judicial districts that calculates and applies interest and penalties on accounts. 
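Because the applicable statute, and therefore the interest and penalty rates, depends on the date of the debtor's offense, the planned enhancement amounts to a date-driven rate lookup. The sketch below is a hedged illustration only; the regime dates and rates are placeholders, not actual statutory values.

```python
from datetime import date

# A hedged sketch of the statute-dependent interest and penalty calculation
# that the planned enhancement would automate. The regime dates and rates
# below are placeholders, not actual statutory values; the real statute is
# selected by the date of the debtor's offense, as described above.

HYPOTHETICAL_REGIMES = [
    # (effective date of regime, annual interest rate, default penalty rate)
    (date(1985, 1, 1), 0.00, 0.00),
    (date(1988, 1, 1), 0.05, 0.10),
]

def interest_and_penalty(offense_date: date, unpaid_balance: float, years_late: float):
    """Pick the regime in force on the offense date, then apply its rates."""
    annual_rate, penalty_rate = 0.0, 0.0
    for effective, rate, penalty in HYPOTHETICAL_REGIMES:
        if offense_date >= effective:
            annual_rate, penalty_rate = rate, penalty
    interest = unpaid_balance * annual_rate * years_late  # simple interest, for illustration
    return interest, unpaid_balance * penalty_rate
```

Because each district entity may apply such calculations at different times, or not at all, separately maintained balances for the same debtor can diverge.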
Accordingly, AOUSC must coordinate with these entities to develop and implement a reconciliation approach to identify differences among the accounts, determine reasons for the differences, and ensure that agreed-upon corrections are made. AOUSC officials recognize the difficulties of reconciling existing debt records and the magnitude of effort that will be needed to reconcile about 100,000 debtor accounts nationwide. During the Raleigh National Fine Center project, judicial district staff took over a year to reconcile about 2,500 accounts. In our August 1993 report, we stated that the reconciliation, eventually completed in 1992, took longer than expected because judicial district and U.S. Attorney staffs had difficulty agreeing on account balances. This difficulty occurred despite the use of procedures directing that (1) account balances differing by $500 or less be considered reconciled because further reconciliation would not be cost-effective and (2) differences remaining would be decided in favor of the debtor.

A separate but related problem that occurred during the Raleigh reconciliation effort is likely to resurface during AOUSC’s future account reconciliation attempts. In Raleigh, although the team performing the reconciliation eventually reconciled all account balances under the procedures previously described, it found that many accounts lacked current addresses and social security numbers. Such information is critical since NFC cannot bill debtors without correct addresses.

AOUSC has begun preparing a strategy to guide the upcoming reconciliation process. This draft strategy, in part based on the reconciliation performed in Raleigh, identifies the roles and responsibilities of staff who are to perform the reconciliation and sets forth steps in the reconciliation process. However, the strategy has not yet been accepted by AOUSC, DOJ, and judicial district officials. AOUSC has not established time frames for beginning and completing reconciliations in each of the remaining 93 districts or estimated the resources needed to perform the process. In addition, the reconciliation strategy does not set forth steps to be followed if addresses and/or social security numbers are missing from debtor account information.

While not specifically addressed by the Criminal Fine Improvements Act of 1987, a critical area that AOUSC and DOJ have not yet addressed is determining the collectibility of criminal debt accounts. Currently, AOUSC records all new criminal debt in its NFC system as accounts receivable without a determination by AOUSC or DOJ as to whether it appears collectible. Also, AOUSC officials have not established within the NFC system an allowance for doubtful or uncollectible receivables to properly account for and report on those receivables determined to have a low probability of collection. Without such allowances, decisionmakers, including the Congress, may be led to believe that substantially greater amounts are collectible. Also, AOUSC officials recognize that, during the reconciliation process, there is a need to identify accounts that are no longer legally enforceable and have included procedures in AOUSC’s draft reconciliation strategy to address these instances. Accounts that are no longer enforceable include special assessment cases when the 5-year statute of limitations on collecting the debt has expired or when a debtor has died. These accounts would not be transferred to NFC. 
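A sketch of how such an allowance might be estimated, consistent with the grouping approach that accounting standards recommend, follows. The risk groups and historical collection rates are hypothetical.

```python
# A minimal sketch, assuming hypothetical risk groups and collection rates,
# of estimating an allowance for uncollectible receivables by grouping
# accounts with similar collection risk characteristics.

HISTORICAL_COLLECTION_RATE = {   # share of each group historically collected
    "current": 0.80,
    "delinquent": 0.35,
    "in_default": 0.05,
}

def allowance_for_uncollectibles(balances_by_group: dict) -> float:
    """Allowance = sum over groups of balance x (1 - expected collection rate)."""
    return sum(balance * (1.0 - HISTORICAL_COLLECTION_RATE[group])
               for group, balance in balances_by_group.items())

# Example: $1 million of fines judged nearly uncollectible would add
# $950,000 to the allowance rather than being reported as fully collectible.
print(allowance_for_uncollectibles({"in_default": 1_000_000.0}))  # 950000.0
```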
AOUSC will need to work with DOJ to ensure that realistic determinations of collectibility are made on new criminal debts as the accounts are established, based on available information, and on those that have become delinquent or are in default. In addition, there is a need to assess existing debt accounts that are legally enforceable but may be uncollectible. Although AOUSC has no authority to make adjustments to accounts or to write off debts, it could categorize certain debts as uncollectible for accounting purposes.

We have worked with various agencies to develop methodologies for determining collectibility and, in some instances, have assisted them in applying the methodologies to their accounts receivable balances. Reliably estimating an allowance for uncollectible receivables requires consideration of both historical collection experience and current economic conditions. Also, a December 1992 standard recommended by the Federal Accounting Standards Advisory Board, and approved for use for fiscal years ending September 30, 1994, and thereafter, states that such an analysis should be performed on groups of accounts with similar collection risk characteristics and should include an evaluation of individual accounts to determine a debtor's current ability to pay.

Court officials told us that sufficient information is often available at the time of sentencing to determine the offender's ability to pay and, therefore, whether or not a monetary penalty imposed is likely to be collected. For example, in the 1993 bombing of the World Trade Center in New York City, each of the four defendants received 240 years in prison and $250,000 in fines. Although $1 million of outstanding fines resulted from this case, DOJ and AOUSC officials believe the probability of collecting these fines is low. Currently, however, DOJ records these debts as being fully collectible. It will be important for AOUSC to work cooperatively with DOJ to develop a methodology for determining collectibility. By not distinguishing between collectible and uncollectible criminal debt accounts, NFC will be unable to accurately report on the composition of the outstanding criminal debt balance. Similarly, users who are responsible for collecting debts will not have the ability to effectively target collection resources and realistically assess their performance unless information is available to identify which debts possess the best likelihood of collection and, thus, should be rigorously pursued.

AOUSC recognizes that the NFC system established during phase I will have to evolve into a more sophisticated financial information system during phase II so that it can be used to improve management of criminal debt collection activities. AOUSC officials told us that they have begun working with DOJ and other system users to define the necessary information and reporting requirements. They plan to address this more fully during phase II and, at the time of our review, had not yet determined what additional enhancements and capabilities will be needed to complete this phase. According to AOUSC officials, at the end of phase II, the NFC system is to: provide a repository of national statistical information on criminal debt; produce reports to accommodate management information needs of the Congress, the judiciary, the executive branch, and other entities; provide the Clerk of the Court offices, probation offices, U.S.
Attorneys' Offices, and the Bureau of Prisons with easy access to account information so that the maximum level of debt collection can be achieved; and provide a means to account for the collection of bail bond and collateral forfeiture actions, as required by the Criminal Fine Improvements Act of 1987. AOUSC officials believe that a successful implementation of phase II will enable them to fully meet the requirements of the act.

The two-phased NFC implementation approach chosen by AOUSC has enabled AOUSC to begin centralizing new criminal debt information in 25 of the smaller judicial districts. AOUSC has also purchased an off-the-shelf accounting system that, together with planned enhancements, should enable it to expand the NFC implementation to additional districts and more effectively accommodate information from the larger districts. The timely completion of these enhancements is important to the future efficient operation of NFC. AOUSC and DOJ also face significant challenges that will require extensive planning, as well as coordination among other NFC system users, if AOUSC is to successfully implement the required system. These challenges include reconciling the estimated $4.5 billion in existing debt and developing a methodology for determining the collectibility of new and existing debtor accounts. It is also critical that AOUSC determine specifically how it intends to proceed beyond phase I to increase user access to NFC account information and improve management reporting. AOUSC will need to address these issues to fully centralize criminal debt for all 94 judicial districts and improve the government's ability to collect what is owed.

To complete the planned implementation of the NFC system, including enhancements such as the one needed to perform interest and penalty calculations and the various interfaces that will facilitate the exchange of information between NFC and its users, we recommend that the Director, AOUSC, work with DOJ to finalize a reconciliation strategy that includes time frames and resources for reconciling existing criminal debt accounts at judicial districts and entering the reconciled information into the NFC system, and to fully define a strategy for addressing the additional actions needed to enable the NFC system to (1) provide a repository for national criminal debt statistical information, (2) produce reports to accommodate management information needs, (3) facilitate communication between NFC and its users, and (4) account for bail bond and collateral forfeiture actions. We also recommend that the Director, AOUSC, and the Director of DOJ's EOUSA work together to develop and implement a methodology for determining the collectibility of all criminal debt.

In written comments on a draft of our report, AOUSC's Director generally agreed with our findings and recommendations. However, the Director said that DOJ, by virtue of its criminal debt collection responsibilities, should assume the lead role for reconciling existing criminal debt accounts and for determining the collectibility of all criminal debt. Accordingly, the Director stated that these recommendations should be directed to DOJ, not AOUSC. Based on our discussions with both entities, DOJ and U.S. Attorneys are the primary parties who will ultimately determine whether or not criminal debt accounts are collectible. However, AOUSC also needs to ensure that the NFC system contains reliable information for reporting on the composition of the outstanding criminal debt balance.
Therefore, we have revised our recommendation regarding developing a methodology for determining collectibility of criminal debt to place this responsibility with both AOUSC and DOJ. While we agree that DOJ and the U.S. Attorneys have a major role in reconciling existing criminal debt account balances, it is AOUSC's responsibility to ensure that the NFC system contains complete and reliable data on all criminal debt, including existing unreconciled accounts. As stated in our report, AOUSC had already prepared a draft reconciliation strategy and had provided it to DOJ for coordination. We have modified our recommendation regarding reconciling existing accounts to state that AOUSC should work with DOJ to finalize a reconciliation strategy.

In addition, the AOUSC Director stated that the draft report did not give AOUSC sufficient credit for the progress in implementing the NFC system. We updated the report to reflect progress through April 1995 but wish to point out that only a small percentage of new criminal debtor accounts are on the system. As noted in the report, we believe that AOUSC and DOJ still face significant challenges in their joint effort to reconcile existing debtor accounts and to report collectible amounts. The AOUSC Director also provided a number of suggested clarifications that we have incorporated as appropriate. The AOUSC Director's comments and our response are included in appendix IV.

DOJ's Office of the Director, EOUSA, commented that the report should have placed more emphasis on the NFC's lack of procedures, policies, and plans for developing the system. Our report describes the status of the NFC system implementation effort and indicates throughout that extensive planning and coordination will be needed to fully implement the NFC system. Nonetheless, progress is being made to enhance the reliability of accounting and reporting of criminal debt information. During our review, AOUSC officials recognized that additional planning would be required to ensure the NFC system would meet the needs of its users and said that they were in the process of preparing formal plans. DOJ's continued involvement in the NFC project is critical to ensuring that the NFC system is developed to meet the needs of all stakeholders, especially in light of DOJ's responsibilities for enforcing the collection of criminal debt.

Also, the Director, EOUSA, stated that reconciliation of existing criminal debt accounts does not represent a major challenge to the NFC as we had reported. There are several factors, in addition to the experiences of the Raleigh reconciliation, that we used to conclude that the reconciliation effort will be a major challenge. First, at the time of our review, AOUSC had not finalized a reconciliation strategy for use by judicial district staff (of which the U.S. Attorney's Offices are a part), including time frames for performing the reconciliations and estimates of the resources needed. Second, AOUSC officials and some judicial district officials—representing small districts that were already providing new criminal debt information to NFC—stated that reconciliation of existing accounts would be a time-consuming and large undertaking.
Third, discussions with the former Associate Director, Financial Litigation Unit, EOUSA, indicated that special assessments alone, which are applied to each count of a conviction and represent about 46 percent of criminal debts, would create a large amount of work during the reconciliation process. The comments from the Director, EOUSA, and our response are included in appendix V.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Director, Administrative Office of the United States Courts; the Director, Executive Office for United States Attorneys; the Attorney General; and interested congressional committees. Copies will be made available to others upon request. Please call me at (202) 512-7487 if you have any questions concerning this report. Other major contributors to this report are listed in appendix VI.

AOUSC has received a total of $19 million from the Crime Victims Fund to support NFC development and operations from its inception through fiscal year 1994. An additional $6.2 million is to be made available to AOUSC in fiscal year 1995. Initially, AOUSC was to receive $2.2 million annually after the Fund had exceeded collections of $125 million through fiscal year 1990 and $150 million thereafter. However, because Crime Victims Fund deposits did not exceed these levels in some years, AOUSC was not provided funding for NFC. In 1992, legislation was enacted to ensure more funding stability for NFC, and $6.2 million was to be provided annually for fiscal years 1992 through 1995. Thereafter, NFC was to receive $3 million annually. As of September 30, 1994, about $6.7 million is reported to have been expended or obligated since the inception of the project. AOUSC officials estimated that an additional $4.2 million will be expended or obligated for fiscal year 1995. To date, the majority of the project's total reported costs have been for personnel compensation and benefits, travel, and automation-related expenditures. Table I.1 summarizes NFC's reported funding activity.

NFC staffing is being accomplished in several ways—transferring staff from the Raleigh Fine Center project, reassigning individuals within AOUSC, hiring new employees, and detailing staff from various components of the federal judiciary and the Department of Justice. According to NFC project management officials, long-term staffing requirements have not been determined. As of November 1994, 23 full-time staff were assigned to the NFC project. At that time, AOUSC officials had proposed that another 11 staff would be added by the end of March 1995 to provide increased support for accounting-related activities and systems analyses. Table I.2 summarizes current and proposed NFC staff by position description.

Our review focused primarily on AOUSC's efforts to centralize criminal debt on its NFC system under phase I implementation. We were unable to analyze AOUSC's phase II implementation due to the lack of specificity at this stage as to what AOUSC planned to accomplish. To obtain information on the status of the NFC's systems implementation, we reviewed available project and systems planning documentation. We also discussed project and systems status with cognizant NFC management and staff, EOUSA officials, and various officials within the Clerk of the Court's office, probation office, and U.S.
Attorney's Office in several judicial districts that are currently providing criminal debt information to the NFC system—the District of Vermont, District of Maine, and Western District of Wisconsin. To determine what percentage of the 13 judicial districts' outstanding criminal debt caseload was on the NFC accounting system, we obtained criminal debt case information from U.S. Attorney's Offices for those districts and compared it to NFC caseload information. To determine how the current system development approach differed from previous efforts, we reviewed available documentation, discussed prior system development efforts with judicial district personnel, and observed the operations of the prior Raleigh Fine Center system.

Since the NFC began processing new criminal debt cases in August 1994, we observed data entry and processing of judicial district standard input data on the NFC accounting system and reviewed monthly reports generated by the system. We reviewed an NFC functional requirements document and discussed system needs with DOJ and EOUSA officials and various judicial district staffs. We discussed with AOUSC officials their strategy for selecting an off-the-shelf system for later use. We also reviewed an August 1994 draft of AOUSC's reconciliation strategy. In addition, we reviewed training documents used to orient and instruct judicial district personnel on the process for submitting data to NFC and reviewed the NFC implementation schedule for adding judicial districts to the NFC system. We also obtained information on current and planned NFC staffing levels, obtained budgetary and expenditure data for fiscal years 1990-1994, and estimated funding activity for fiscal year 1995. We performed our review from August 1994 through January 1995 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the letter from the Director, Administrative Office of the United States Courts, dated April 16, 1995.

1. This issue is discussed in the "Agency Comments and Our Evaluation" section of the report.

2. See comment 1.

3. We have modified the Results in Brief and other sections of the report to reflect that 25 districts are providing new criminal debt account information for input into the NFC system and that the NFC implementation is slightly ahead of schedule.

4. The report has been modified to reflect the recent additional progress made by AOUSC. We have further updated the report to indicate that AOUSC (1) has begun converting criminal debt accounts from its interim accounting software package to the new off-the-shelf accounting system and (2) is planning to complete conversion efforts for all 25 districts by May 1995.

5. AOUSC is correct in its characterization of our report as emphasizing that much more needs to be done to completely implement the NFC system. We have modified the report to reflect that, as of April 1995, AOUSC was slightly ahead of schedule in implementing the NFC system.

6. We believe the discussions of AOUSC's overall strategy and policies for the phase I NFC implementation effort are appropriate.

7. We have modified the Results in Brief section to reflect that AOUSC and DOJ should work together to determine the collectibility of criminal debt. We have further clarified that DOJ is responsible for enforcing the collection of delinquent and defaulted criminal debts.

8. See comment 1.

9. See comment 1.
The following are GAO's comments on the letter from the Director of DOJ's Executive Office for United States Attorneys dated April 19, 1995.

1. This issue is discussed in the "Agency Comments and Our Evaluation" section of the report.

2. The available project and systems planning documentation we reviewed consisted of (1) a draft project plan, (2) the February 1994 functional requirements document, (3) training documentation presented during training sessions, (4) the judicial district implementation schedule included in our report as appendix III, and (5) a draft reconciliation strategy. AOUSC had coordinated these documents with DOJ.

3. At the time of our review, we were informed by both the NFC Project Manager and the former Associate Director, Financial Litigation Staff, EOUSA, that a decision had not been made as to how DOJ would be provided access to NFC data. Our report points out the need for AOUSC to complete system enhancements to automate certain functions, including establishing an interface between NFC and DOJ. Also, during our review, AOUSC was in the process of developing automated account establishment and maintenance forms to transfer criminal debt data from judicial districts to NFC. The fact that DOJ staff are working closely with AOUSC staff to implement the NFC system should help ensure that DOJ's user needs are being met.

4. See comment 1.

5. Although NFC could enter reconciled amounts into the system without addresses and social security numbers, our report points out that the lack of this information was a problem during the Raleigh reconciliation. This information would be needed before collection of criminal debt could take place in other districts.

6. We have revised the report to clarify that AOUSC's draft reconciliation strategy requires judicial district staff only to identify those existing debt accounts that are no longer legally enforceable because, for example, the defendant's obligation to pay has expired or the defendant has died. These cases would not be transferred to NFC. The methodology for determining collectibility that we have recommended AOUSC and DOJ develop should address (1) steps to determine the collectibility of legally enforceable existing debt and (2) guidance as to when these accounts should be entered into the NFC system.

7. We have revised the report.

8. We agree that uncollectible debts may still be legally enforceable and, as such, should be transferred to NFC. As stated in our report, uncollectible accounts should not be considered valid accounts receivable for financial reporting purposes; however, AOUSC needs to account for these receivables for enforcement, compliance, and tracking purposes.

Jan B. Montgomery, Assistant General Counsel
Pursuant to a congressional request, GAO reviewed the National Fine Center (NFC), focusing on: (1) the Administrative Office of the United States Courts' (AOUSC) efforts to establish NFC and centralize criminal debt accounting and reporting within NFC; and (2) the additional actions AOUSC needs to take to complete implementation of the NFC automated accounting system.

GAO found that AOUSC has: (1) established a process for centralizing and maintaining federal criminal debt accounts; (2) developed a formal training program for judicial district staff; (3) selected an off-the-shelf accounting system and an inexpensive interim software package to use until the selected system becomes operational; (4) begun processing new criminal debt information for 25 small judicial districts; and (5) begun converting criminal debt information from the interim system to the off-the-shelf system. In addition, GAO found that: (1) AOUSC is slightly ahead of schedule in establishing NFC and centralizing federal criminal debt, but much planning and implementation effort remains; (2) many enhancements to the selected accounting system will be required to automate certain manual processes before NFC can begin receiving new criminal debt information from larger judicial districts; (3) AOUSC and the Department of Justice (DOJ) still must ensure the reliability of the existing criminal debt accounting data before entering the information into the NFC system; and (4) although AOUSC plans to further enhance the system to increase user access to NFC information within the next 5 years, it must first identify the specific enhancements required and determine how to accomplish them.
Interior lacks adequate assurance that it is receiving the full royalties it is owed because (1) neither BLM nor OMM is fully inspecting leases and meters as required by law and agency policies, and (2) MMS lacks adequate management systems and sufficient internal controls for verifying that royalty payment data are accurate and complete.

With regard to inspecting oil and gas production, BLM is charged with inspecting approximately 20,000 producing onshore leases annually to ensure that oil and gas volumes are accurately measured. However, BLM's state Inspection and Enforcement Coordinators from Colorado, Montana, New Mexico, Utah, and Wyoming told us that only 8 of the 23 field offices in the 5 states completed both their (1) required annual inspections of wells and leases that are high-producing and those that have a history of violations and (2) inspections every third year on all remaining leases. According to the BLM state Inspection and Enforcement Coordinators, the number of completed production inspections varied greatly by field office. For example, while BLM inspectors were able to complete all of the production inspections in the Kemmerer, Wyoming, field office, inspectors in the Glenwood Springs, Colorado, field office were able to complete only about one-quarter of the required inspections. Officials in 3 of the 5 field offices in which we held detailed discussions with inspection staff told us that they had not been able to complete the production inspections because of competing priorities, including their focus on completing a growing number of drilling inspections for new oil and gas wells, and high inspection staff turnover.

However, BLM officials from all 5 field offices told us that when they have conducted production inspections they have identified a number of violations. For example, BLM staff in 4 of the 5 field offices identified errors in the amounts of oil and gas production volumes reported by operators to MMS by comparing production reports with third-party source documents. Additionally, BLM staff from 1 field office we visited showed us a bypass built around a gas meter, allowing gas to flow around the meter without being measured. BLM staff ordered the company to remove the bypass. Staff from another field office told us of a case in which individuals illegally tapped into a gas line and routed gas to private residences. Finally, in one of the field offices we visited, BLM officials told us of an instance in which a company maintained two sets of conflicting production data—one used by the company and another reported to MMS.

Moreover, OMM, which is responsible for inspecting offshore production facilities that include oil and gas meters, did not inspect all oil and gas royalty meters, as required by its policy, in 2007. For example, OMM officials responsible for meter inspections in the Gulf of Mexico told us that they completed about half of the required 2,700 inspections, but that they met OMM's goal for witnessing oil and gas meter calibrations. OMM officials told us that one reason they were unable to complete all the meter inspections was their focus on the remaining cleanup work from hurricanes Katrina and Rita. Meter inspections are an important aspect of the offshore production verification process because, according to officials, one of the most common violations identified during inspections is missing or broken meter seals. Meter seals are meant to prevent tampering with measurement equipment.
When seals are missing or broken, it is not possible without closer inspection to determine whether the meter is correctly measuring oil or gas production.

With regard to MMS's assurance that royalty data are being accurately reported by companies, MMS's systems and processes for collecting and verifying these data lack both capabilities and key internal controls, including those focused on data accuracy, integrity, and completeness. For example, MMS lacks an automated process to routinely and systematically reconcile all production data filed by payors (those responsible for paying the royalties) with production data filed by operators (those responsible for reporting production volumes). MMS officials told us that before they transitioned to the current financial management system in 2001, their system included an automated process that reconciled the production and royalty data on all transactions within approximately 6 months of the initial entry date. However, MMS's new system does not have that capability. As a result, such comparisons are not performed on all properties. Comparisons are made, if at all, 3 years or more after the initial entry date by the MMS compliance group for those properties selected for a compliance review or audit.

In addition, MMS lacks a process to routinely and systematically reconcile all production data included by payors on their royalty reports or by operators on their production reports with production data available from third-party sources. OMM does compare a large part of the offshore operator-reported production data with third-party data from pipeline operators through both its oil and gas verification programs, but BLM compares only a relatively small percentage of reported onshore oil and gas production data with third-party pipeline data. When BLM and OMM do make comparisons and find discrepancies, they forward the information to MMS, which then takes steps to reconcile and correct these discrepancies by talking to operators. However, even when discrepancies are corrected and the operator-reported data and pipeline data have been reconciled, these newly reconciled data are not automatically and systematically compared with the reported sales volume in the royalty report, previously entered into the financial management database, to ensure the accuracy of the royalty payment. Such comparisons occur only if a royalty payor's property has been selected for an audit or compliance review.

Furthermore, MMS's financial management system lacks internal controls over the integrity and accuracy of production and royalty-in-value data entered by companies. Companies may legally make changes to both royalty and production data in MMS's financial management system for up to 6 years after the reporting month, and these changes may necessitate changes in the royalty payment. However, when companies retroactively change the data they previously entered, these changes do not require prior approval by, or notification of, MMS. As a result of the companies' ability to unilaterally make these retroactive changes, the production data and required royalty payments can change over time, further complicating efforts by agency officials to reconcile production data and ensure that the proper amount of royalties was paid. Compounding this data reliability concern, changes made to the data do not necessarily trigger a review to determine their reasonableness or whether additional royalties are due.
According to agency officials, these changes are not subject to review at the time a change is made and would be evaluated only if selected for an audit or compliance review. This is also problematic because companies may change production and royalty data after an audit or compliance review has been done, making it unclear whether these audited royalty payments remain accurate after they have been reviewed. Further, MMS officials recently examined data from September 2002 through July 2007 and identified over 81,000 adjustments made to data outside the allowable 6-year time frame. MMS is working to modify the system to automatically identify adjustments that have been made to data outside of the allowable 6-year time frame, but this effort does not address the need to identify adjustments made within the allowable time that might necessitate further adjustments to production data and royalty payments due.

Finally, MMS's financial management system could not reliably detect when production data reports were missing until late 2004, and the system continues to lack the ability to automatically detect missing royalty reports. In 2004, MMS modified its financial management system to automatically detect missing production reports. As a result, MMS has identified a backlog of approximately 300,000 missing production reports that must be investigated and resolved. It is important that MMS have a complete set of accurate production reports so that BLM can prioritize production inspections, and its compliance group can easily reconcile royalty payments with production information. Importantly, MMS's financial management system continues to lack the ability to automatically detect cases in which an expected royalty report has not been filed. While not filing a royalty report may be justifiable under certain circumstances, such as when a company sells its lease, MMS's inability to detect missing royalty reports presents the risk that MMS will not identify instances in which it is owed royalties that are simply not being paid. Officials told us they are currently able to identify missing royalty reports in instances when they have no royalty report to match with funds deposited to Treasury. However, cases in which a company stops filing royalty reports and stops paying royalties would not be detected unless the payor or lease was selected for an audit or compliance review.

MMS's increasing use of compliance reviews, which are more limited in scope than audits, has led to an inconsistent use of third-party data to verify that self-reported royalty data are correct, thereby placing accurate royalty collections at risk. Since 2001, MMS has increasingly used compliance reviews to achieve its performance goals of completing compliance activities—either full audits or compliance reviews—on a predetermined percentage of royalty payments. According to MMS, compliance reviews can be conducted much more quickly and require fewer resources than audits, largely because they represent a quicker, more limited reasonableness check of the accuracy and completeness of a company's self-reported data, and do not include a systematic examination of underlying source documentation. Audits, on the other hand, are more time- and resource-intensive, and they include the review of original source documents, such as sales revenue data, transportation and gas processing costs, and production volumes, to verify whether company-reported data are accurate and complete.
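Whether performed in an audit or by an automated system check, the underlying verification is a three-way comparison of payor-reported, operator-reported, and third-party volumes for the same lease and production month. The following is a minimal sketch of that comparison under assumed record layouts and an assumed 1 percent tolerance; it illustrates the technique, not MMS's actual software.

```python
# Hypothetical records keyed by (lease_id, production_month): volumes in
# barrels. In practice these come from payor royalty reports, operator
# production reports, and third-party pipeline statements, respectively.
royalty_reports = {("OCS-0123", "2007-06"): 9_800}
production_reports = {("OCS-0123", "2007-06"): 10_000}
pipeline_statements = {("OCS-0123", "2007-06"): 10_050}

def discrepancies(tolerance=0.01):
    """Flag lease-months where the three sources disagree by more than a
    relative tolerance (1 percent here -- an assumed threshold)."""
    flagged = []
    keys = set(royalty_reports) | set(production_reports) | set(pipeline_statements)
    for key in sorted(keys):
        vols = [src.get(key) for src in
                (royalty_reports, production_reports, pipeline_statements)]
        if None in vols:
            flagged.append((key, "missing report"))  # e.g., an unfiled royalty report
            continue
        if (max(vols) - min(vols)) / max(vols) > tolerance:
            flagged.append((key, f"volumes disagree: {vols}"))
    return flagged

print(discrepancies())
```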
When third-party data are readily available from OMM, MMS may use them when conducting a compliance review. For example, MMS may use available third-party data on oil and gas production volumes collected by OMM in its compliance reviews for offshore properties. In contrast, because BLM collects only a limited amount of third-party data for onshore production, and MMS does not request these data from the companies, MMS does not systematically use third-party data when conducting onshore compliance reviews.

Despite conducting thousands of compliance reviews since 2001, MMS has only recently evaluated their effectiveness. For calendar year 2002, MMS compared the results of 100 of about 700 compliance reviews of offshore leases and companies with the results of audits conducted on those same leases or companies. However, while the compliance reviews covered, among other things, 12 months of production volumes on all products—oil, gas, and retrograde, a liquid product that condenses out of gas under certain conditions—the audits covered only 1 month and one product. As a result of this evaluation comparing the results of compliance reviews with those of audits, MMS now plans to improve its compliance review process by, for example, ensuring that it includes a step to check that royalties are paid on all royalty-bearing products, including retrograde.

To achieve its annual performance goals, MMS began using the compliance reviews along with audits. One of MMS's performance goals is to complete compliance activities—either audits or compliance reviews—on a specified percentage of royalty payments within 3 years of the initial royalty payment. For example, in 2006 MMS reported that it had achieved this goal by confirming reasonable compliance on 72.5 percent of all calendar year 2003 royalties. To help meet this goal, MMS continues to rely heavily on compliance reviews, yet it is unable to state the extent to which this performance goal is accomplished through audits as opposed to compliance reviews. As a result, MMS does not have information available to determine the percentage of the goal that was achieved using third-party data and the percentage that did not systematically rely on third-party data. Moreover, to help meet its performance goal, MMS has historically conducted compliance reviews or audits on leases and companies that have generated the most royalties, with the result that the same leases and companies are reviewed year after year. Accordingly, many leases and companies have gone for years without ever having been reviewed or audited.

In 2006, Interior's Inspector General (IG) reviewed MMS's compliance process and made a number of recommendations aimed at strengthening it. The IG recommended, among other things, that MMS examine 1 month of third-party source documentation as part of each compliance review to provide greater assurance that both the production and allowance data are accurate. The IG also recommended that MMS track the percentage of the annual performance goal that was accomplished through audits versus through compliance reviews, and that MMS move toward a risk-based compliance program and away from reviewing or auditing the same leases and companies each year. To address the IG's recommendations, MMS has recently revised its compliance review guidance to include suggested steps for reviewing third-party source production data when available for both offshore and onshore oil and gas, though the guidance falls short of making these steps a requirement.
MMS has also agreed to start tracking compliance activity data in 2007 that will allow it to report the percentage of the performance goal that was achieved through audits versus through compliance reviews. Finally, MMS has initiated a risk-based compliance pilot project, whereby leases and companies are selected for compliance work according to MMS-defined risk criteria that include factors other than whether the leases or companies generate high royalty payments. According to MMS, during fiscal year 2008 it will further evaluate and refine the pilot as it moves toward fuller implementation.

Finally, representatives from the states and tribes who are responsible for conducting compliance work under agreements with MMS have expressed concerns about the quality of self-reported production and royalty data they use in their reviews. As part of our work, we sent questionnaires to all 11 states and seven tribes that conducted compliance work for MMS in fiscal year 2007. Of the nine state and five tribal representatives who responded, seven reported that they lack confidence in the accuracy of the royalty data. For example, several representatives reported that because of concerns with MMS's production and royalty data, they routinely look to other sources of corroborating data, such as production data from state oil and gas agencies and tax agencies. In addition, several respondents noted that companies frequently report production volumes to the wrong leases and that they must then devote their limited resources to correcting these reporting problems before beginning their compliance reviews and audits.

Because MMS's royalty-in-kind program does not extend the same production verification processes used by its oil program to its gas program, it does not have adequate assurance that it is collecting the gas royalties it is owed. As noted, under the royalty-in-kind program, MMS collects royalties in the form of oil and gas and then sells these commodities in competitive sales. To ensure that the government obtains the fair value of these sales, MMS must make sure that it receives the volumes to which it is entitled. Because prices of these commodities fluctuate over time, it is also important that MMS receive the oil and gas at the time it is entitled to them. As part of its royalty-in-kind oversight effort, MMS identifies imbalances between the volume operators owe the federal government in royalties and the volume delivered and resolves these imbalances by adjusting future delivery requirements or cash payments. The methods that MMS uses to identify these imbalances differ for oil and gas.

For oil, MMS obtains pipeline meter data from OMM's liquid verification system, which records oil volumes flowing through numerous metering points in the Gulf of Mexico region. MMS calculates its royalty share of oil by multiplying the total production volumes provided in these pipeline statements by the royalty rates for a given lease. MMS compares this calculation with the volume of royalty oil that the operators delivered as reported by pipeline operators. When the value of an imbalance cumulatively reaches $100,000, MMS conducts further research to resolve the discrepancy. Using pipeline statements to verify production volumes is a good check against companies' self-reporting of royalties due the federal government because companies have an incentive not to underreport their share of oil going into the pipeline, since that is the amount they will have to sell at the other end of the pipeline.
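The oil-side check just described reduces to a small computation: multiply third-party pipeline volumes by the lease royalty rate, compare the result with the royalty oil actually delivered, and research any imbalance once its cumulative value reaches $100,000. The sketch below illustrates this; the lease volumes, 1/6 royalty rate, and $60-per-barrel price are hypothetical, while the royalty-share arithmetic and the $100,000 threshold follow the process described above.

```python
THRESHOLD = 100_000  # research an imbalance once its cumulative value hits this

def oil_imbalance_value(months, royalty_rate, price_per_bbl):
    """Cumulative dollar value of under/over-delivery of royalty oil.

    months: list of (pipeline_volume_bbl, delivered_royalty_bbl) pairs, where
    pipeline volumes come from OMM's liquid verification system and delivered
    volumes from pipeline operators' reports.
    """
    cumulative_bbl = 0.0
    for pipeline_bbl, delivered_bbl in months:
        owed_bbl = pipeline_bbl * royalty_rate  # government's royalty share
        cumulative_bbl += owed_bbl - delivered_bbl
    return cumulative_bbl * price_per_bbl

# Hypothetical lease: 1/6 royalty rate, $60/bbl, three months of data.
months = [(80_000, 12_000), (82_000, 12_400), (79_000, 11_900)]
value = oil_imbalance_value(months, royalty_rate=1 / 6, price_per_bbl=60)
if abs(value) >= THRESHOLD:
    print(f"research needed: cumulative imbalance ${value:,.0f}")
```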
For gas, MMS relies on information contained in two operator-provided documents—monthly imbalance statements and production reports. Imbalance statements include the operator's total gas production for the month, the share of that production that the government is entitled to, and any differences between what the operator delivered and the government's royalty share. Production reports contain a large number of data elements, including production volumes for each gas well. MMS compares the production volumes contained in the imbalance statements with those in the production reports to verify production levels. MMS then calculates its royalty share based on these production figures and compares its royalty share with gas volumes the operators delivered as reported by pipeline operators. When the value of an imbalance cumulatively reaches $100,000, MMS conducts further research to resolve the discrepancy.

MMS's ability to detect gas imbalances is weaker than for oil because it does not use third-party metering data to verify the operator-reported production numbers. Since 2004, OMM has collected data from gas pipeline companies through its gas verification system, which is similar to its liquid verification system in that the system records information from pipeline company-provided source documents. Our review of data from this program shows that these data could be a useful tool in verifying offshore gas production volumes. Specifically, our analysis of these pipeline data showed that for the months of January 2004, May 2005, July 2005, and June 2006, 25 percent of the pipeline metering points had an outstanding discrepancy between self-reported and pipeline data. These discrepancies are both positive and negative—that is, production volumes submitted to MMS by operators are at times either under- or overreported. Data from the gas verification system could be useful in validating production volumes and reducing discrepancies. However, to fully benefit from this opportunity, MMS needs to improve the timeliness and reliability of these data. After examining this issue, in December 2007, the Subcommittee on Royalty Management, a panel appointed by the Secretary of the Interior to examine MMS's royalty program, reported that OMM is not adequately staffed to conduct sufficient review of data from the gas verification system. Neither we nor MMS has yet determined the net impact of these discrepancies on royalties owed the federal government.

The methods and underlying assumptions MMS uses to compare the revenues it collects in kind with what it would have collected in cash do not account for all costs and do not sufficiently deal with uncertainties, raising doubts about the claimed financial benefits of the royalty-in-kind program. Specifically, MMS's calculation showing that MMS sold the royalty oil and gas for $74 million more than MMS would have received in cash payments did not appropriately account for uncertainty in estimates of cash payments. In addition, MMS's calculation that early royalty-in-kind payments yielded $5 million in interest was based on assumptions about payment dates and interest rates that could misstate the estimated interest benefit. Finally, MMS's calculation that the royalty-in-kind program cost about $8 million less to administer than an in-value program did not include significant costs that, if included, could change MMS's conclusions.
MMS sold the oil and gas it collected during the 3 fiscal years 2004 through 2006 for $8.15 billion and calculated that this amount exceeded what MMS would have received in cash royalties by about $74 million—a net benefit of approximately 0.9 percent. MMS has recognized that its estimates of what it would have received in cash payments are subject to some degree of error but has not appropriately evaluated or reported how sensitive the net benefit calculations are to this error. This is important because even a 1 percent error in the estimates of cash payments would change the estimated benefit of the royalty-in-kind program from $74 million to anywhere from a loss of $6 million to a benefit of $155 million.

Moreover, MMS's annual reports to the Congress present oil sales data in aggregate and therefore do not reflect the fact that, in many individual sales, MMS sold the oil it collected in kind for less than it estimates it would have collected in cash. Specifically, MMS estimates that, in fiscal year 2006, it sold 28 million barrels of oil, or 64 percent of all the oil it collected in kind, for less than it would have collected in cash. The government would have received an additional $6 million in revenue if it had taken these royalties in cash instead. These sales indicate that MMS has not always been able to achieve one of its central goals: to select, based on systematic economic analysis, which royalties to take in cash and which to take in kind in a way that maximizes revenues to the government.

According to a senior MMS official, the federal government has several advantages when selling gas that it does not have when selling oil, a fact that helps to explain why MMS's gas sales have performed better than its oil sales. For example, MMS can bundle the natural gas production in the Gulf of Mexico from many different leases into large volumes that MMS can use to negotiate discounts for transporting gas from production sites to market centers. Because purchasers receive these discounts when they buy gas from MMS, they may be willing to pay more for gas from MMS than from the original owners. Opportunities for bundling are less prevalent in the oil market. Because MMS generally does not have these or other advantages when selling oil, purchasers often pay MMS about what they would pay other producers for oil, and sometimes less. Indeed, MMS's policies allow it to sell oil for up to 7.7 cents less per barrel than MMS estimates it would collect if it took the royalties in cash. MMS told us that the other financial benefits of the royalty-in-kind program, including interest payments and reduced administrative costs, justify selling oil for less than the estimated cash payments because once these additional revenues are factored in, the net benefit to the government is still positive. However, as discussed below, we have found that there are significant questions and uncertainties about the other financial benefits as well.

Revenues from the sale of royalty-in-kind oil are due 10 days earlier than cash payments, and revenues from the sale of in-kind gas are due 5 days earlier. MMS calculates that the government earned about $5 million in interest from fiscal years 2004 through 2006 from these early payments that it would not have received had it taken royalties in cash. We found two weaknesses in the way MMS calculates this interest. First, the payment dates used to calculate the interest revenue have the potential to over- or underestimate its value.
MMS calculates the interest on the basis of the time between the actual date that Treasury received a royalty-in-kind payment and the theoretical latest date that Treasury would have received a cash payment under the royalty-in-value program. However, MMS officials told us that cash payments can, and sometimes do, arrive before their due date. As a result, MMS might be overstating the value of the early royalty-in-kind payments. Second, the interest rate used to calculate the interest revenue may either over- or understate its value because the rate is not linked to any market rate. From fiscal year 2004 through 2007, MMS used a 3 percent interest rate to calculate the time value of these early payments. However, during this time, actual market interest rates at which the federal government borrowed fluctuated. For example, 4-week Treasury bill rates ranged from a low of 0.72 percent to a high of 5.18 percent during this same period. Therefore, during some fiscal years, MMS likely overstated or understated the value of these early payments.

MMS has developed procedures to capture the administrative costs of the royalty-in-kind and cash royalty programs and includes in its administrative cost comparison primarily the variable costs for the federal offshore oil and gas activities—that is, costs that fluctuate based on the volume of oil or gas received by MMS, such as labor costs. Although MMS also includes some department-level fixed costs, it excludes some fixed costs that it does not incur on a predictable basis (largely information technology costs). According to MMS, if it included these IT and other such costs, there would be a high potential of skewing the unit price used to determine the administrative cost savings. However, by excluding such fixed costs from the administrative cost comparison, MMS is not including all the necessary cost information to evaluate the efficacy of the royalty-in-kind program.

MMS's administrative cost analysis compares a bundle of royalty-in-kind program administrative costs divided by the number of barrels of oil equivalent realized by the royalty-in-kind program during a year, with a bundle of cash royalty program administrative costs divided by the number of barrels of oil equivalent realized by that program. The difference between these amounts represents the difference in cost to administer a barrel of oil equivalent under each program. MMS then multiplies the difference in cost to administer a barrel of oil equivalent under the two programs by the number of barrels of oil equivalent realized by the royalty-in-kind program to determine the administrative cost savings. However, MMS's calculations excluded from the analysis some fixed costs that are not incurred on a regular or predictable basis. For example, in fiscal year 2006, royalty-in-kind IT costs of $3.4 million were excluded from the comparison. Moreover, additional IT costs of approximately $29.4 million—some of which may have been incurred for either the royalty-in-kind or the cash royalty program—were also excluded. Including and assigning these IT costs to the programs supported by those costs would provide a more complete accounting of the respective costs of the royalty-in-kind and royalty-in-value programs, and would likely impact the results of MMS's administrative cost analysis.
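The three benefit components discussed above (net sales proceeds, early-payment interest, and administrative cost savings) can each be written down directly, which also makes the sensitivity problem easy to see. The sketch below uses the figures reported above where available ($8.15 billion in sales, a claimed $74 million benefit, a 3 percent rate, oil payments due 10 days early); the per-program administrative cost inputs are hypothetical.

```python
# 1. Net sales benefit and its sensitivity to the estimated cash baseline.
sales = 8.15e9                      # royalty-in-kind sales, FY 2004-2006
claimed_benefit = 74e6              # MMS's reported net benefit
est_cash = sales - claimed_benefit  # implied estimate of in-value collections
for err in (-0.01, 0.01):           # a 1 percent error in the cash estimate
    adjusted = claimed_benefit - err * est_cash
    print(f"benefit with {err:+.0%} error: ${adjusted:,.0f}")
# -> roughly a $7 million loss to a $155 million benefit, as reported above.

# 2. Interest from early receipt (simple interest at MMS's assumed 3 percent).
def early_payment_interest(amount, days_early, rate=0.03):
    return amount * rate * days_early / 365

print(f"${early_payment_interest(1e9, days_early=10):,.0f}")  # $1B of oil revenue

# 3. Administrative cost savings: difference in cost per barrel of oil
#    equivalent (BOE), scaled by royalty-in-kind volume. Inputs hypothetical.
def admin_savings(rik_cost, rik_boe, iv_cost, iv_boe):
    return (iv_cost / iv_boe - rik_cost / rik_boe) * rik_boe

print(f"${admin_savings(20e6, 100e6, 60e6, 250e6):,.0f}")
```

The first loop makes the report's sensitivity point concrete: because the claimed benefit is only about 0.9 percent of sales, an error of even 1 percent in the estimated cash baseline is larger than the benefit itself.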
Ultimately, the system used by Interior to ensure taxpayers receive appropriate value for oil and gas produced from federal lands and waters is more of an honor system than we are comfortable with. Despite the heavy scrutiny that Interior has faced in its oversight of royalty management, we and others continue to identify persistent weaknesses in royalty collections. Given both the long-term fiscal challenges the government faces and the increased demand for the nation's oil and gas resources, it is imperative that we have a royalty collection system going forward that can assure the American public that the government is receiving proper royalty payments.

Our work on this issue is continuing along several avenues, including comparing the value of royalties taken in kind with the value of royalties taken in cash, assessing the rate of oil and gas development on federal lands, comparing the amount of money the U.S. government receives with what foreign countries receive for allowing companies to develop and produce oil and gas, and examining further the accuracy of MMS's production and royalty data. We plan to make recommendations to address the weaknesses we identified in our final reports on these issues. We look forward to further work and to helping this subcommittee and the Congress as a whole to exercise oversight on this important issue.

Mr. Chairman, this concludes our prepared statement. We would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For further information about this testimony, please contact either Frank Rusco, at 202-512-3841 or ruscof@gao.gov, or Jeanette Franzel, at 202-512-9406 or franzelj@gao.gov. Contact points for our Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Ron Belak, Ben Bolitzer, Lisa Brownson, Melinda Cordero, Nancy Crothers, Glenn C. Fischer, Cindy Gilbert, Tom Hackney, Chase Huntley, Heather Hill, Barbara Kelly, Sandra Kerr, Paul Kinney, Jennifer Leone, Jon Ludwigson, Tim Minelli, Michelle Munn, G. Greg Peterson, Barbara Timmerman, and Mary Welch.
Companies that develop and produce federal oil and gas resources do so under leases administered by the Department of the Interior (Interior). Interior's Bureau of Land Management (BLM) and Offshore Minerals Management (OMM) are responsible for overseeing oil and gas operations on federal leases. Companies are required to self-report their production volumes and other data to Interior's Minerals Management Service (MMS) and to pay royalties either "in value" (payments made in cash) or "in kind" (payments made in oil or gas). GAO's testimony will focus on whether (1) Interior has adequate assurance that it is receiving full compensation for oil and gas produced from federal lands and waters, (2) MMS's compliance efforts provide a check on industry's self-reported data, (3) MMS has reasonable assurance that it is collecting the right amounts of royalty-in-kind oil and gas, and (4) the benefits of the royalty-in-kind program that MMS has reported are reliable. This testimony is based on ongoing work. When this work is complete, we expect to make recommendations to address these and other findings. To address these issues GAO analyzed MMS data, reviewed MMS and other agency policies and procedures, and interviewed officials at Interior. In commenting on a draft of this testimony, Interior provided GAO technical comments which were incorporated where appropriate.

Interior lacks adequate assurance that it is receiving full compensation for oil and gas produced from federal lands and waters because Interior's Bureau of Land Management (BLM) and Offshore Minerals Management (OMM) are not fully conducting production inspections as required by law and agency policies and because MMS's financial management systems are inadequate and lack key internal controls. Officials at BLM told us that only 8 of the 23 field offices in five key states we sampled completed their required production inspections in fiscal year 2007. Similarly, officials at OMM told us that they completed about half of the required production inspections in calendar year 2007 in the Gulf of Mexico. In addition, MMS's financial management system lacks an automated process for routinely and systematically reconciling production data with royalty payments.

MMS's compliance efforts do not consistently examine third-party source documents to verify whether self-reported industry royalty-in-value payment data are complete and accurate, putting full collection of royalties at risk. In 2001, to help meet its annual performance goals, MMS moved from conducting audits, which compare self-reported data against source documents, toward compliance reviews, which provide a more limited check of a company's self-reported data and do not include systematic comparison to source documentation. MMS could not tell us what percentage of its annual performance goal was achieved through audits as opposed to compliance reviews.

Because the production verification processes MMS uses for royalty-in-kind gas are not as rigorous as those applied to royalty-in-kind oil, MMS cannot be certain it is collecting the gas royalties it is due. MMS compares companies' self-reported oil production data with pipeline meter data from OMM's oil verification system, which records oil volumes flowing through metering points. While analogous data are available from OMM's gas verification system, MMS has not chosen to use these third-party data to verify the company-reported production numbers.
The financial benefits of the royalty-in-kind program are uncertain because of questions surrounding the underlying assumptions and methods MMS used to compare the revenues it collected in kind with what it would have collected in cash. Specifically, questions and uncertainties exist regarding MMS's methods for calculating the net revenues from in-kind oil and gas sales, interest payments, and administrative cost savings.
The Air Force and the Navy plan to use the JPATS aircraft to train entry-level Air Force and Navy student pilots in primary flying to a level of proficiency from which they can transition into advanced pilot training. The JPATS aircraft is designed to replace the Air Force's T-37B and the Navy's T-34C primary trainer aircraft and other training devices and courseware. It is expected to have a service life of 24 years and to provide better performance and improved safety, reliability, and maintainability than existing primary trainers. For example, the JPATS aircraft is expected to overcome certain safety issues with existing trainers by adding an improved ejection seat and a pressurized cockpit. The JPATS aircraft is expected to be more reliable than existing trainers, experiencing fewer in-flight engine shutdowns and other equipment failures. It is also expected to be easier to maintain because it is to use more standard tools and common fasteners.

To calculate the number of JPATS aircraft required, the Air Force and the Navy in 1993 used a formula that considered such factors as the aircraft utilization rate, annual flying hours, mission capable rate, attrition rate, sortie length, working days, and turnaround time. The Air Force calculated a need for 372 JPATS aircraft, and the Navy calculated a need for 339, for a total combined requirement of 711 JPATS aircraft. In December 1996, the two services reviewed these requirements. At that time, the Navy approved an increase of 29 aircraft, increasing its total to 368 aircraft. This increased total requirements from 711 to 740 JPATS aircraft. The Air Force's Air Education and Training Command—responsible for pilot training—determined that the Air Force would need 441 aircraft instead of 372 aircraft. However, the Air Force did not approve this increase.

The JPATS aircraft shown in figure 1, the T-6A Texan II, is to be a derivative of the Pilatus PC-9 commercial aircraft. Raytheon Aircraft Company, the contractor, plans to produce the aircraft in Wichita, Kansas, under a licensing agreement with Pilatus, the Swiss manufacturer of the PC-9. The JPATS aircraft will undergo limited modification to incorporate several improvements and features that are not found in the commercial version of the aircraft but are required by the Air Force and the Navy. Modifications involve (1) improved ejection seats, (2) improved birdstrike protection, (3) a pressurized cockpit, (4) an elevated rear (instructor) seat, and (5) flexibility to accommodate a wider range of male and female pilot candidates. These modifications are currently being tested during the qualification test and evaluation phase, which is scheduled to be completed in November 1998. Initial operational capability is planned for fiscal year 2001 for the Air Force and fiscal year 2003 for the Navy.

The Air Force and the Navy competitively selected an existing commercial aircraft design to satisfy their primary trainer requirements instead of developing a new trainer aircraft. This competitive acquisition strategy, according to Air Force officials, resulted in original program estimates of about $7 billion being reduced to about $4 billion upon contract award. The Air Force, as executive agent for the program, awarded a contract to Raytheon in February 1996 to develop and produce between 102 and 170 JPATS aircraft, with a target quantity of 140, along with simulators and associated ground-based training system devices, a training management system, and instructional courseware.
The contract included seven production options. Through fiscal year 1997, the Air Force has exercised the first four options, acquiring 1 aircraft for engineering and manufacturing development and 23 production aircraft. A separate contract was awarded to Raytheon for logistics support, with options for future years' activities. Production is scheduled to continue through 2014.

In 1996, the Air Force and the Navy calculated the number of JPATS aircraft required using several factors, including projections of JPATS mission capable rates and projected attrition rates based on historical experience. However, the data they used in their calculations contained various inconsistencies. For example, the projected JPATS mission capable rates of 91 percent and 80 percent used by the Air Force and the Navy, respectively, differed substantially from each other and from the 94-percent rate included in the contract for procurement of the aircraft. Using lower mission capable rates to calculate aircraft quantities means that more aircraft would be needed to achieve annual flying hour requirements for training than if higher rates were used. Furthermore, the Air Force's projected attrition rates were not consistent with historical attrition experience with its existing primary trainer, and the Navy used a rate that differs from the rate that DOD now says is accurate. Until these inconsistencies are resolved, it is unclear how many JPATS aircraft should be procured.

Although the Air Force and the Navy are procuring the same JPATS aircraft to train entry-level pilots and the aircraft will be operated in a joint training program, they used different mission capable rates to calculate aircraft requirements. Specifically, the Air Force used a 91-percent mission capable rate and the Navy used an 80-percent rate. Neither of these rates is consistent with the JPATS contract, which requires Raytheon to build an aircraft that meets or exceeds a 94-percent mission capable rate. Therefore, we recalculated the Air Force's and the Navy's total JPATS aircraft requirements using the same formula as the Air Force and the Navy, substituting the 94-percent contract mission capable rate for the rates the services used. Table 1 shows how higher mission capable rates could decrease JPATS aircraft quantity requirements by as many as 60 aircraft—10 for the Air Force and 50 for the Navy.

The attrition rate used by the Air Force to calculate the number of JPATS aircraft needed was more than twice the attrition rate of its current primary trainer, which was placed in service in the late 1950s. The Air Force estimated that 1.5 JPATS aircraft would be lost or damaged beyond repair for every 100,000 flying hours. However, the historical attrition rate for the current primary trainer is 0.7 per 100,000 flying hours. Although DOD advised us that single-engine trainers such as JPATS are expected to have higher attrition rates than two-engine trainers such as the T-37B, we note that important JPATS features are increases in safety and reliability, including fewer in-flight engine shutdowns and other equipment failures. In addition, use of an advanced ground-based training system, being acquired as part of the JPATS program, is expected to result in greater pilot familiarity with the aircraft's operation prior to actual flights. Data provided by the Navy and DOD regarding attrition rates are conflicting.
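As a rough cross-check of table 1, assume (simplistically) that the required quantity scales inversely with the mission capable rate. The services' full formula includes other factors, so this back-of-the-envelope check approximates, rather than reproduces, the table's reductions of 10 and 50 aircraft.

    # Back-of-the-envelope check only; the services' full formula has more
    # factors, so these figures approximate, not reproduce, table 1.
    for service, quantity, rate_used in [("Air Force", 372, 0.91), ("Navy", 368, 0.80)]:
        at_contract_rate = quantity * rate_used / 0.94
        print(f"{service}: {quantity} -> {at_contract_rate:.0f} aircraft "
              f"at the 94-percent contract rate")
    # Air Force: 372 -> 360; Navy: 368 -> 313 -- the same direction and
    # rough magnitude as the 10- and 50-aircraft reductions in table 1.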
For example, the Navy's calculations in 1996 used an attrition rate of 1.5 aircraft per 100,000 flight hours to calculate the required quantity of JPATS aircraft. To derive this rate, the Navy factored in the attrition experience of the existing T-34C trainer, using a lifetime attrition rate of 0.4 per 100,000 flight hours. However, in commenting on a draft of this report, DOD stated that the lifetime attrition rate for the T-34C is 2.1 aircraft per 100,000 flying hours, and the Navy provided data that it believed supported this rate. Our analysis, however, showed that the data supported a rate of 3.6 aircraft per 100,000 flying hours, which differs from both the Navy's and DOD's figures.

The JPATS aircraft procurement plan does not take advantage of the most favorable prices provided by the contract. The contract includes annual options with predetermined prices for aircraft orders of variable quantities. Procuring fewer than the target quantity can result in a unit price increase of 1 to 52 percent. Procuring above the target quantity, or at the maximum quantity, however, provides very little additional price reduction. The contract contains unit price charts for the variation in quantities specified in lots II through VIII. The charts contain pricing factors for various production lot quantity sizes that are used in calculating unit prices based on previous aircraft purchases. The charts are designed so that the unit price increases if the number of aircraft procured is fewer than the target quantity and decreases if the quantity procured is more than the target quantity. As shown in table 2, lots II through IV have been exercised at the maximum quantities of 2 (plus 1 developmental aircraft), 6, and 15.

According to the procurement plan, 18 aircraft are to be procured during fiscal year 1998 and 12 aircraft during fiscal year 1999, for a total of 30 aircraft. All of these aircraft are being procured by the Air Force. In fiscal year 2000, the Navy is scheduled to begin procuring JPATS aircraft. Our analysis shows that DOD can make better use of the price advantages that are included in the JPATS contract. For example, as shown in table 3, 30 aircraft can be procured more economically if 16, rather than 18, aircraft are procured in fiscal year 1998 and 14, rather than 12, aircraft are procured in fiscal year 1999. If as few as 16 aircraft were procured in fiscal year 1998, they could be acquired at the same unit price as currently planned because the unit price would not increase until fewer than 16 JPATS aircraft were procured in fiscal year 1998. Deferring 2 aircraft from fiscal year 1998 to fiscal year 1999 would increase the quantity in fiscal year 1999 from 12 to 14, reducing the fiscal year 1999 unit price from $2.905 million to $2.785 million. This deferral would not only save $1.360 million over the 2 years but also reduce the risk of buying aircraft before the completion of operational testing by delaying the purchase of two aircraft and permitting more testing to be completed.

DOD could also save money if it altered its plans to procure 26 aircraft in fiscal year 2000, a quantity lower than the target of 32 aircraft. The unit price could be reduced by $104,212, or 4 percent, if DOD procured the target quantity. In addition, once the JPATS aircraft successfully completes operational test and evaluation, the aircraft could be procured at the more economical, or target, rates.
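The $1.360 million savings can be reconstructed arithmetically. The report does not quote the fiscal year 1998 unit price; the $2.745 million used below is the value implied by the stated savings and the two fiscal year 1999 prices, and should be read as an inference, not a contract figure.

    # Reconstructing the table 3 comparison (dollars in millions).
    # The FY 1998 unit price of 2.745 is inferred, not quoted in the report.
    fy98_price = 2.745
    planned  = 18 * fy98_price + 12 * 2.905  # 18 aircraft in FY98, 12 in FY99
    deferred = 16 * fy98_price + 14 * 2.785  # defer 2 aircraft into FY99
    print(f"savings for the same 30 aircraft: ${planned - deferred:.3f} million")
    # -> $1.360 million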
Our analysis demonstrates that maintaining yearly production rates at least within the target range is more economical than production rates in the minimum range. As we previously reported, economical procurement of tested systems has often been hindered because DOD did not give such procurements high enough priority.

The JPATS cockpit is expected to meet DOD's requirement that it accommodate at least 80 percent of the eligible female pilot population. Pilot size, as defined by the JPATS anthropometric characteristics, determines the percentage of pilots that can be accommodated in the JPATS cockpit. JPATS program officials estimate that the planned cockpit dimensions will anthropometrically accommodate approximately 97 percent of the eligible female population. The minimum design weight of the JPATS ejection seat (116 pounds) will accommodate 80 percent of the eligible female population. Because concerns have been raised about the ability of JPATS aircraft to accommodate female pilots, Congress directed DOD to conduct studies to determine the appropriate percentage of male and female pilots that could be accommodated in the cockpit. A DOD triservice working group studied the issue and concluded that a 32.8-inch minimum sitting height, instead of 34 inches, is one of several variables that would allow for accommodation of at least 80 percent of the eligible female population. The DOD working group determined that this change in sitting height would not require major development or significantly increase program risk. Thus, the Office of the Secretary of Defense established 32.8 inches as the new JPATS minimum sitting height requirement. In addition, the minimum weight requirement for the JPATS ejection seat was lowered from 135 pounds to 116 pounds to accommodate 80 percent of the eligible female population. Another study is investigating whether, at minimal additional cost, an ejection seat with a lower minimum weight limit might accommodate more than 80 percent of the female pilot trainee population. Phase one of that study is scheduled to be completed in the fall of 1997.

DOD is proceeding with plans to procure a fleet of JPATS aircraft that may exceed the quantity needed to meet training requirements. Until inconsistencies in the data used to calculate JPATS requirements are resolved, it is unclear how many aircraft should be procured. Furthermore, DOD's schedule for procuring the aircraft does not take advantage of the most economical approach, which would allow it to save money and permit more time for operational testing. We therefore recommend that the Secretary of Defense (1) determine the appropriate attrition rates and mission capable rates for calculating JPATS requirements, taking into account the planned improvements in JPATS safety, reliability, and maintainability, and recalculate the requirements as appropriate and (2) direct the Air Force to revise the JPATS procurement plan to take better advantage of price advantages in the contract and, upon successful completion of operational test and evaluation, acquire JPATS aircraft at the most economical target quantity unit prices provided by the contract.

In commenting on a draft of this report, DOD did not agree with our conclusion that DOD overstated JPATS requirements or with our recommendation that the Secretary of Defense direct the Air Force and the Navy to recalculate aircraft requirements.
DOD partially concurred with our recommendation to buy JPATS aircraft at the most economical target unit prices provided in the contract. DOD believed that the Air Force and the Navy used appropriate attrition rates and mission capable rates to calculate JPATS requirements and that these rates accounted for improvements in technology and mechanical reliability. It noted that we had incorrectly identified the T-34C aircraft attrition rate as 0.4 aircraft per 100,000 flying hours rather than 2.1 aircraft per 100,000 flying hours. The Navy provided data that it believed supported DOD's position, but our analysis showed that these data supported an attrition rate that differed from both the Navy's and DOD's rates. Furthermore, DOD stated that the 94-percent mission capable rate cited in the JPATS contract is achievable only under optimal conditions and that the lower mission capable rates used by the Air Force and the Navy are based on the maximum possible aircraft use at the training sites. Although DOD stated that the Navy used a mission capable rate of 87 percent, our analysis showed that the Navy used a rate of 80 percent. Because of the inconsistencies and conflicts in the attrition and mission capable rate data between DOD and the services, we revised our conclusion to state that, until these discrepancies are resolved, it is unclear how many aircraft should be procured, and we revised our recommendation to call for the Secretary of Defense to determine the appropriate rates and recalculate JPATS requirements as appropriate.

DOD agreed that procuring aircraft at the most economical price is desirable and stated that it will endeavor to follow this approach in future JPATS procurement. However, it noted that competing budget requirements significantly affect procurement rates of all DOD systems and that limited resources generally make procurement at the most economical rates unachievable. DOD's written comments are reprinted in appendix I.

To review service calculations of JPATS requirements, DOD's procurement schedule for the aircraft, and efforts to design the JPATS cockpit to accommodate female pilots, we interviewed knowledgeable officials and reviewed relevant documentation at the Office of the Under Secretary of Defense (Acquisition and Technology) and the Office of the Secretary of the Air Force, Washington, D.C.; the Training Systems Program Office, Wright-Patterson Air Force Base, Ohio; the Air Force Air Education and Training Command, Randolph Air Force Base, Texas; the Navy Chief of Naval Air Training Office, Corpus Christi, Texas; and the Raytheon Aircraft Company, Wichita, Kansas. We examined Air Force and Navy justifications for using specific attrition rates, mission capable rates, and flying hour numbers in determining aircraft quantities. We also analyzed the variation in quantity unit price charts in the procurement contract to determine the most economical way to procure JPATS aircraft. In addition, we reviewed congressional language on cockpit accommodation requirements and current program estimates of compliance with that requirement. This review was conducted from September 1996 to July 1997 in accordance with generally accepted government auditing standards.

As the head of a federal agency, you are required under 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight no later than 60 days after the date of this report.
A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency's first request for appropriations made more than 60 days after the date of this report.

We are sending copies of this report to the Secretaries of the Navy and the Air Force and to interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 521-4587 if you or your staff have any questions concerning this report. The major contributors to this report were Robert D. Murphy, Myra A. Watts, and Don M. Springman.

The following are GAO's comments on the Department of Defense's (DOD) letter dated July 17, 1997.

1. The Navy, in deriving the projected attrition rate of 1.5 aircraft losses per 100,000 flying hours for Joint Primary Aircraft Training System (JPATS) aircraft, used a lifetime attrition rate of 0.4 for the T-34C in determining total aircraft requirements. DOD, in its response to our draft of this report, stated that the actual lifetime attrition rate for the T-34C is 2.1; however, the data provided to support that rate indicated an attrition rate of 3.6 aircraft per 100,000 flying hours. Because the attrition rate figures provided to us for the Navy's T-34 differ substantially, the Air Force's estimated attrition for JPATS aircraft is twice the rate experienced on the T-37, and the Air Force's Air Education and Training Command has revised its calculations of requirements, we believe a reassessment of requirements for JPATS aircraft is needed.

2. The JPATS production contract specifies that the aircraft shall meet or exceed a 94-percent mission capable rate for the total hours the aircraft is in the inventory and does not specify the severity of conditions. Although the Navy now maintains that its requirement was for a primary trainer aircraft with an 87-percent mission capable rate, the Navy used, and continues to use, an 80-percent mission capable rate in calculating JPATS aircraft quantity requirements. The latest JPATS Operational Requirements Document, issued December 1996, shows an 80-percent mission capable rate for the Navy, not 87 percent as indicated in DOD's response to our draft report.

3. We recognize that limited resources and competing budget requirements affect production rates; however, the point we made was that DOD's procurement plan (the future years defense plan) for acquisition of JPATS aircraft did not make the best use of the limited resources that had already been assigned to the JPATS program. Our report, on page 6, illustrates how, with fewer resources, the Air Force could have acquired the same number of aircraft over a 2-year period. The illustration is valid in that it shows that the DOD procurement plan was not the most effective and that it should be reassessed. Indeed, the procurement quantities in the plan for fiscal years 1999 and 2000 continue to include insufficient quantities for DOD to take advantage of the most favorable prices in the contract, and without a reassessment and a change to the plan, Congress may need to ensure that resources are used most effectively.

4. DOD did not provide us information to show how historical data for single-engine trainer aircraft were used to predict the JPATS rate of 1.5 losses per 100,000 flight hours. We believe that a predicted attrition rate for JPATS aircraft that is twice that of 40-year-old T-37 trainers does not account for improvements that are to be incorporated in JPATS aircraft.
5. We do not believe it is premature at this time to reassess JPATS requirements. We believe reassessment is needed now because the Navy has provided several different attrition rates, all of which are intended to represent T-34 historical experience; the proposed JPATS attrition rate is twice the historical rate of the Air Force T-37; and the Air Force and the Navy continue to project different mission capable rates for JPATS aircraft that are lower than the rate the aircraft is required to demonstrate under the contract. We agree that, as experience is gained with the JPATS aircraft, the quantities should also be reassessed periodically.
GAO reviewed: (1) the Air Force's and Navy's calculations of the quantity of Joint Primary Aircraft Training System (JPATS) aircraft needed to meet training requirements; (2) the impact of the Department of Defense's (DOD) procurement schedule on the aircraft's unit price; and (3) service efforts to design the JPATS cockpit to accommodate female pilots. GAO noted that: (1) the Air Force and the Navy used inconsistent data to calculate the number of JPATS aircraft required for primary pilot training; (2) the Air Force used an attrition rate that was twice as high as the historical attrition rate for its existing primary trainer and the Navy used an attrition rate that differs from the rate that DOD now cites as accurate; (3) until inconsistencies in the mission capable rates and attrition rates are resolved, it is unclear how many JPATS aircraft should be procured; (4) DOD's procurement plan for acquiring JPATS aircraft does not take full advantage of the most favorable prices available in the contract; (5) for example, the plan schedules 18 aircraft to be procured during fiscal year (FY) 1998 and 12 aircraft during FY 1999, a total of 30 aircraft; (6) however, GAO found that these 30 aircraft could be procured more economically if 16 rather than 18 aircraft are procured in FY 1998 and 14 rather than 12 aircraft are procured in FY 1999; (7) this approach would save $1.36 million over the 2 fiscal years and permit more operational testing and evaluation to be completed; (8) furthermore, the procurement plan does not schedule a sufficient number of JPATS aircraft for procurement in fiscal year 2000 to achieve lower prices that are available under the terms of the contract; (9) because concerns had been raised about the ability of JPATS aircraft to accommodate female pilots, Congress directed DOD to study and determine the appropriate percentage of the female pilot population that the aircraft should physically accommodate; (10) based on its studies, DOD established the requirement that the JPATS aircraft be able to accommodate 80 percent of the eligible female pilot population; (11) pilot size determines the percentage of pilots that can be accommodated in the JPATS cockpit; (12) planned cockpit dimensions are expected to accommodate about 97 percent of the eligible female pilot population; and (13) to permit safe ejection from the aircraft, the ejection seat minimum pilot weight is 116 pounds, which is expected to accommodate 80 percent of the eligible female pilot population.
Established in 1994 to serve as the UN's main oversight body, OIOS had a budget of $63.9 million in fiscal biennium 2004-2005 and employs a workforce of 256 staff in 20 locations around the world. OIOS was established to assist the Secretary-General in fulfilling internal oversight responsibilities over UN resources and staff. The stated mission of OIOS is "to provide internal oversight for the United Nations that adds value to the organization through independent, professional, and timely internal audit, monitoring, inspection, evaluation, management consulting, and investigation activities and to be an agent of change that promotes responsible administration of resources, a culture of accountability and transparency, and improved program performance." The office is headed by an Under Secretary-General who is appointed by the Secretary-General for a 5-year fixed term with no possibility of renewal. The Under Secretary-General may be removed by the Secretary-General only for cause and with General Assembly approval. OIOS's authority spans all UN activities under the Secretary-General, namely the UN Secretariat in New York, Geneva, Nairobi, and Vienna; the five regional commissions for Africa, Asia and the Pacific, West Asia, Europe, and Latin America and the Caribbean; peacekeeping missions and humanitarian operations in various parts of the world; and numerous UN funds and programs, such as the United Nations Environment Program (UNEP), the United Nations Human Settlements Program (UN-HABITAT), and the Office of the United Nations High Commissioner for Refugees (UNHCR).

The UN Procurement Service is located within the UN Secretariat's Department of Management, at the UN headquarters in New York City. As of August 2005, the Procurement Service had a staff of 71 personnel that included 21 procurement officers, 5 associate procurement officers, and 24 procurement assistants. The Procurement Service manages procurement for the UN Secretariat headquarters and procures air services and items costing more than $200,000 for the Secretariat's peacekeeping and other field missions. While it also provides procurement guidance and support to UN personnel who procure goods and services in the Secretariat's peacekeeping and other field missions, it does not directly supervise them. Overall Secretariat procurement spending tripled between 1994 and 2004 as the number of UN peacekeepers in the field increased. Past reviews of the UN's procurement system have identified problems in each of the six commonly identified characteristics that define an effective procurement system. In 1999, we found that the United Nations had yet to fully address these problems. The United Nations has retained a consulting firm to assess the Procurement Service's financial and internal controls, following the decision of a former procurement official to plead guilty to federal charges involving his receipt of funds from firms seeking UN contracts. The Procurement Service is currently under interim management.

The ability of OIOS to carry out independent, effective oversight is impeded by the UN's budgeting processes for both its regular budget and its extrabudgetary resources, that is, funds from the assessed peacekeeping budget and voluntary contributions from the funds and programs subject to OIOS's oversight. The Secretariat's budget office—over which OIOS has oversight authority—exercises control over OIOS's regular budget, which may result in potential conflicts of interest.
UN budgeting processes make it difficult for OIOS to shift resources among OIOS locations or divisions to meet changing priorities. In addition, OIOS lacks control over extrabudgetary resources from UN funds and programs. OIOS negotiates its terms of work and payment for services with the manager of the program it intends to examine. However, if the entity does not agree to an OIOS examination or does not provide requested funding for OIOS to perform its work, OIOS cannot adequately perform its oversight responsibilities. These regular budget and extrabudgetary processes have impeded OIOS's ability to complete some of its identified oversight priorities and raise serious questions about OIOS's independence.

OIOS's ability to carry out independent, effective oversight is hindered by the UN's budgeting processes for its regular budget. The General Assembly resolution creating OIOS stated that the new internal oversight body shall exercise operational independence and that the Secretary-General, when preparing the budget proposal for OIOS, should take into account the independence of the office. OIOS receives its funding from two sources: the UN's regular budget and extrabudgetary resources, that is, funds from the assessed peacekeeping budget and voluntary contributions from funds and programs subject to OIOS's oversight. UN rules and regulations preclude the movement of funds between the regular budget and extrabudgetary resources. From 1996-1997 through 2004-2005, the number of staff funded by the regular budget remained relatively flat while the number funded by extrabudgetary resources increased steadily. During that same period, OIOS's regular budget increased from about $15 million to about $24 million; its extrabudgetary resources grew from about $7 million to about $40 million. OIOS officials stated that the increase in extrabudgetary resources has been largely due to growth in oversight for UN peacekeeping (see fig. 1).

The Secretariat's budget office—over which OIOS has oversight authority—exercises control over OIOS's regular budget, which infringes on the office's independence. In developing the biennium budget, the Under Secretary-General of OIOS sends OIOS's biennial budget request to the Secretariat's budget office for review. While OIOS can negotiate with the budget office on suggested changes to its budget proposal, it has limited recourse regarding the budget office's decisions. This process limits OIOS's ability to independently request from the General Assembly the resources it has determined it needs to provide effective oversight and also poses a conflict of interest.

UN budgeting processes make it difficult for OIOS to reallocate its regular budget resources among OIOS locations worldwide or among its three divisions—internal audit; investigations; and monitoring, evaluation, and consulting services—to meet changing priorities. For example, OIOS officials requested a reallocation of 11 investigative posts from New York to Vienna to save travel funds and be closer to the entities that they investigate. This change was approved only after repeated requests by OIOS over a number of years. In addition, although OIOS officials stated that they routinely identify high-risk and high-priority audits and investigations after the biennial budget has been determined, it is difficult to mobilize the resources required to carry out the work.
OIOS officials have stated that the office is under-resourced, but they do not have a mechanism in place to determine appropriate staffing levels or to justify budget requests, except for peacekeeping oversight services. Officials provided us with several examples of work funded through the regular budget that they were unable to undertake due to resource constraints. For example, although the Board of Auditors recommended on several occasions that OIOS develop the capacity to perform more information technology audits, OIOS officials said that they lack the financial and human resources to do so. We found that OIOS does not have a mechanism in place to determine its overall staffing requirements, which would help justify requests for additional resources. For peacekeeping audit services, on the other hand, OIOS has a metric—endorsed by the General Assembly—that provides one professional auditor per $100 million of annual peacekeeping budget. Although OIOS has succeeded in justifying increases for peacekeeping oversight services consistent with the large increase in the peacekeeping budget since 1994, it has been difficult to support staff increases in oversight areas that lack a comparable metric.

OIOS lacks control over the extrabudgetary resources funded by UN funds and programs. OIOS's reliance on extrabudgetary resources has been steadily increasing over the years—from 30 percent in fiscal biennium 1996-1997 to 62 percent in the last fiscal biennium, 2004-2005 (see fig. 1). According to OIOS officials, the growth in the office's budget is primarily due to extrabudgetary resources for audits and investigations of peacekeeping. OIOS is dependent on UN funds and programs and other UN entities for resources, access, and reimbursement for the services it provides. Heads of these entities have the right to approve or deny funding for oversight work proposed by OIOS. By denying OIOS funding, UN entities could avoid OIOS audits, and high-risk areas could potentially be excluded from adequate examination. For example, we have previously testified that the UN Office of the Iraq Program refused to fund an OIOS risk assessment of its program management division.

UN funds and programs limit OIOS's ability to independently set its work priorities in that they exercise the power to decide and negotiate whether to fund OIOS oversight activities and at what level. OIOS officials stated that because of OIOS's budgetary structure, some high-priority work has not been undertaken. For example, according to a senior OIOS official, the office has not been able to adequately oversee the extrabudgetary activities of many entities, particularly those based in Geneva. In addition, the office has not been able to oversee the UN Framework Convention on Climate Change or the UN Convention to Combat Desertification. OIOS reports that if funding is not forthcoming, it may have no choice but to discontinue auditing certain extrabudgetary activities as of January 2006. In addition, in some cases, fund and program managers have disputed the fees OIOS charges for its services. For example, 40 percent of the $2 million OIOS billed after completing its work is currently in dispute, and since 2001, less than half of the entities have paid OIOS in full for investigative services it has provided. If the funds and programs fail to pay or pay late, this also adversely affects OIOS's resources.
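The budget-share figures and the peacekeeping staffing metric cited above can be checked with simple arithmetic. The sketch below uses the rounded dollar amounts quoted earlier, so its results are approximate, and the $5 billion peacekeeping budget in the final example is an illustrative assumption, not a figure from this testimony.

    # Extrabudgetary share of OIOS's budget, from the rounded figures above.
    for biennium, regular_m, extra_m in [("1996-1997", 15, 7), ("2004-2005", 24, 40)]:
        share = extra_m / (regular_m + extra_m)
        print(f"{biennium}: {share:.0%}")  # -> ~32% and ~62%, consistent with
                                           #    the 30 and 62 percent cited.

    # The General Assembly-endorsed metric: one professional auditor per
    # $100 million of annual peacekeeping budget.
    def peacekeeping_auditors(annual_budget_millions):
        return annual_budget_millions / 100

    print(peacekeeping_auditors(5_000))    # a hypothetical $5B budget -> 50 auditors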
The United Nations has improved the clarity of its procurement manual, which now provides procurement staff with clearer guidance on UN procurement regulations and policies. However, the United Nations has yet to fully address several problems affecting the openness and professionalism of its procurement system. Specifically, concerns remain relating to an independent bid protest system, staff training and professional certification, and ethics regulations.

The UN Procurement Service has improved the clarity of its procurement manual. The manual guides UN staff in their conduct of procurement actions worldwide. The Procurement Service is responsible for ensuring that the manual reflects financial rules and regulations and relevant Secretary-General bulletins and administrative instructions. An open and effective procurement system requires the establishment of processes that are available to and understood by vendors and procurement officers alike. In 1999, we reported that the manual did not provide detailed discussions of procedures and policies. The United Nations has addressed these concerns in the current manual, which was endorsed by a group of outside experts. The manual now has detailed step-by-step instructions on the procurement process for both field and headquarters staff, including illustrative flow charts. It also includes more guidance that addresses headquarters and field procurement concerns, such as delegations of procurement authority, more specific descriptions of short- and long-term acquisition planning, and the evaluation of requests for proposals valued at more than $200,000. However, the United Nations has yet to add a section addressing renovation construction to the manual.

The United Nations has not heeded a 1994 recommendation by a group of independent experts to establish an independent vendor bid protest process as soon as possible. As a result, UN vendors cannot protest the Procurement Service's handling of their bids to an independent official or office. We reported in 1999 that such a process is an important aspect of an open procurement system. A process that allows complaints to be adjudicated by an independent office could promote the openness and integrity of the process and help alert senior UN officials to the failure of procurement staff to comply with stated procedures. The current UN bid protest process is not independent of the Procurement Service. Vendors are directed to file protests through a complaints ladder process that begins with the acting Chief of the Procurement Service and moves through his immediate supervisor, currently the Secretariat Controller. The Chief of the Procurement Service at the time of our audit stated that there is no vendor demand for an independent process.

In contrast to the UN's approach to bid protests, the U.S. government provides vendors with an agency-level bid protest process. In addition, it provides vendors with two independent bid protest forums. Vendors dissatisfied with a U.S. agency's handling of their bids may protest to the U.S. Court of Federal Claims or to GAO. GAO receives more than 1,100 such protests annually. Several other countries provide vendors with independent bid protest processes. Moreover, the United Nations has endorsed the use of similar independent bid protest mechanisms by other member nations. In 1994, the General Assembly approved a model procurement law that incorporates independent bid protest.
In its comments on the law it drafted for the General Assembly, the UN Commission on International Trade Law stressed that a bid protest review body should be independent of the procuring entity.

The United Nations has not fully addressed longstanding concerns regarding the qualifications of its procurement staff. Although the Procurement Service is developing new training curricula and has initiated a certification program, most Procurement Service staff at headquarters have not been professionally certified. In addition, it has yet to determine the extent to which its training budget may need to increase to pay for these new programs. A June 2005 report by a UN contractor indicated that most UN headquarters Procurement Service officers and assistants have not been certified as to the level of their procurement experience and education. The report indicated that only 3 of the 20 procurement officers and 21 procurement assistants responding to a survey had been certified by a recognized procurement certification body. The contractor informed us that the UN's level of certification was low compared with the level at other organizations. Its report concluded that it was imperative that more UN procurement staff be certified. Although the experts did not address the qualifications of field procurement staff, UN headquarters officials informed us that field procurement staff are also in need of more training.

Concerns regarding the qualifications of UN procurement staff are longstanding. Since 1994, international procurement experts have reported that UN procurement staff were largely ill-trained or inexperienced and that very few had any recognized procurement qualifications. In 1998, OIOS reported that the majority of the 20 professional headquarters procurement staff had procurement-related experience and were qualified. However, OIOS also stated that field procurement staff had limited procurement-related experience. In addition, OIOS reported that field-level procurement practices were at high risk due to continuing problems in training peacekeeping procurement staff in the field. In 1999, we reported that the UN Secretariat procurement training program did not constitute a comprehensive curriculum providing a continuum of basic to advanced skills for the development of procurement officers.

The UN Procurement Service has acknowledged that more needs to be done to train and certify procurement staff. It has developed basic and advanced training curricula and provided training to staff at headquarters and in the field during 2004 and 2005. However, although the training curriculum covers several key topics in procurement, it does not provide in-depth or comprehensive coverage of some topics. For example, the training material does not include in-depth information on procurement negotiation, which is of particular importance in light of UN plans to renovate its headquarters complex. UN data indicate that 1-week basic procurement training courses are being provided to some field staff and that advanced procurement training courses are being provided to some headquarters and field staff. However, it is not clear whether all procurement staff have received training in both the basic and advanced curricula. The Procurement Service has also begun developing a mandatory certification program. Procurement officials stated that their goal is to secure certification of all staff within 5 years. As part of this certification program, UN officials will train procurement staff to serve as trainers.
According to UN officials, the curriculum for the trainers has been finalized, and the UN has trained some staff as trainers. However, these staff have yet to receive the certification they need before they can train UN procurement staff. Other uncertainties relating to the certification program remain, including the number of staff who will require certification training and the type of certification they will receive.

Current budget levels for procurement training may not be adequate to support the new training efforts. A June 2005 report by the National Institute of Governmental Purchasing advised the Procurement Service that its training budget was not sufficient to maintain the levels of training needed to ensure fair, open, and ethical procurement operations. It recommended that the Procurement Service seek a larger budget. However, the budget's adequacy cannot be assessed because the Procurement Service has not yet developed training profiles for each staff member. Without assessments of individual training needs, the Procurement Service cannot accurately determine the funds it requires to address those needs.

The United Nations has not yet acted on several proposals forwarded by the UN Procurement Service to clarify ethics regulations for procurement staff. Although the United Nations has established general ethics rules and regulations for all UN staff, the Secretary-General has directed that additional rules be developed for procurement officers concerning their status, rights, and obligations. The General Assembly also has asked the Secretary-General to issue ethics guidelines for procurement staff without delay. Several draft rules and regulations specifically addressing procurement ethics are awaiting internal review or approval; no firm dates have been set for their promulgation. If approved, the documents would emphasize that all procurement activities in the Secretariat are conducted in the sole interest of the United Nations. They include a declaration of ethics responsibilities for procurement staff, a code of conduct for vendors, and an outline of the policies in current rules and regulations as they relate to procurement staff. The proposed policies would also address ethics standards on conflict of interest and acceptance of gifts for procurement staff and outline UN rules, regulations, and procedures for suppliers of goods and services to the United Nations.

Mr. Chairman, this completes my prepared statement. I will be happy to address any questions you may have. For further information, please contact Thomas Melito at (202) 512-9601. Individuals making key contributions to this testimony included Assistant Director Phyllis Anderson, Jaime Allentuck, Jeffrey Baldwin-Bott, Richard Boudreau, Lynn Cothern, Timothy DiNapoli, Etana Finkler, Jackson Hufnagle, Kristy Kennedy, Clarette Kim, John Krump, Joy Labez, Jim Michels, Valérie Nowak, Barbara Shields, and Pierre Toureille.

United Nations: Sustained Oversight Is Needed for Reforms to Achieve Lasting Results, GAO-05-392T (Washington, D.C.: Mar. 2, 2005).

United Nations: Oil for Food Program Audits, GAO-05-346T (Washington, D.C.: Feb. 15, 2005).

United Nations: Reforms Progressing, but Comprehensive Assessments Needed to Measure Impact, GAO-04-339 (Washington, D.C.: Feb. 13, 2004).

United Nations: Progress of Procurement Reforms, GAO/NSIAD-99-71 (Washington, D.C.: Apr. 15, 1999).
This testimony discusses internal oversight and procurement in the United Nations (UN). The findings of the Independent Inquiry Committee into the UN's Oil for Food Program have rekindled longstanding concerns about internal oversight and procurement at the United Nations. In addition, the UN's 2005 World Summit called for a host of reforms, from human rights, terrorism, and peace-building to economic development and management reform. Specifically, this testimony conveys our preliminary observations on the UN's budgeting processes as they relate to the ability of the UN's Office of Internal Oversight Services (OIOS) to perform independent and effective oversight. We will also discuss some of the UN's efforts to address problems affecting the openness and professionalism of its procurement system. We would like to emphasize that our comments reflect the preliminary results of two ongoing GAO engagements examining the UN's internal oversight and procurement services. We will issue reports covering a wide range of issues in both areas early next year. OIOS's ability to carry out independent, effective oversight over UN organizations is hindered by the UN's budgeting processes for its regular and extrabudgetary resources. In establishing OIOS in 1994, the General Assembly passed a resolution stating that OIOS should exercise operational independence in carrying out its oversight responsibilities and that the Secretary-General should take into account the independence of the office in preparing its budget. The Secretariat's budget office--over which OIOS has oversight authority--exercises control over OIOS's regular budget, which may result in conflicts of interest and infringe on OIOS's independence. In addition, UN budgeting processes make it difficult for OIOS to reallocate its regular budget resources among OIOS locations worldwide or among its three oversight divisions--internal audit; investigations; and monitoring, evaluation, and consulting services--to meet changing priorities. OIOS officials provided us with several examples of work funded through the regular budget that they were unable to undertake due to resource constraints. In addition, OIOS does not have a mechanism in place to identify or justify OIOS-wide staffing needs, except for peacekeeping oversight services. OIOS also lacks control over its extrabudgetary resources, which comprised 62 percent of its overall budget in fiscal biennium 2004-2005. The UN funds and programs that OIOS oversees maintain control over some of these resources since OIOS negotiates its terms of work and payment for services with the managers of the programs it intends to examine. If the entity does not agree to an OIOS examination or does not provide sufficient funding for OIOS to carry out its work, OIOS cannot adequately perform its oversight responsibilities. The United Nations has made progress in improving the clarity of its procurement manual but has yet to fully address previously identified concerns affecting the lack of openness and professionalism of its procurement system. Specifically, concerns remain relating to an independent bid protest system, staff training and professional certification, and ethics regulations. In 2002, the United Nations revised its procurement manual, which now provides staff with clearer guidance on UN procurement regulations and policies, including more detailed step-by-step instructions of the procurement process than it had when we reported in 1999.
However, the Secretariat has yet to enhance the openness of the procurement system by heeding a 1994 recommendation by outside experts that it establish an independent bid protest process. At present, vendors cannot file complaints with an independent official or office if they believe that the UN Procurement Service has not handled their bids appropriately. As a result, senior UN officials may not be made aware of problems in the procurement process. In addition, the United Nations has not fully addressed longstanding concerns affecting the professional qualifications of its procurement workforce. The Procurement Service is developing a new training curriculum and a professional certification program. It has trained some procurement staff as instructors but has yet to complete its training curricula. In addition, the United Nations still needs to develop the individual training profiles necessary to determine the extent of its training requirements. A June 2005 report by an independent contractor indicated that most headquarters procurement officers and procurement assistants do not possess professional procurement certifications. According to the contractor, the level of certification was low in comparison with other organizations. Further, the United Nations has yet to adopt several internal proposals to clarify UN ethics regulations for procurement staff and vendors, such as a code of conduct.
The federal government acquires a wide variety of capital assets for its own use, including land, structures, equipment, vehicles, and information technology. Large sums of taxpayer funds are spent on these assets, and their performance affects how well agencies achieve their missions. To directly acquire an asset, agencies generally are required to have full up-front BA for the total asset cost—usually a sizable amount. This requirement allows Congress to recognize the full budgetary impact of capital spending at the time a commitment is made; however, it also means that the full cost of an asset must be absorbed in the annual budget of an agency or program, despite the fact that benefits may accrue over many years. This up-front funding requirement has presented two challenges for capital planning and budgeting at the federal level.

One challenge is how to permit "full cost" analysis and to promote more effective capital planning and budgeting by allocating capital costs on an annual basis to programs that use capital. Allocating capital costs over the assets' useful lives ensures that the full annual cost of resources a program uses is considered when evaluating the program's effectiveness. It can make program managers more aware of ongoing capital costs, thus promoting more effective decision making for capital. It may also contribute to equalizing comparisons across different programs or different approaches to achieving similar goals. A second challenge is how to address the possible bias against the acquisition of necessary capital assets that may be created by spikes (large, temporary, year-to-year increases in BA), which can make capital assets seem prohibitively expensive in an era of resource constraints. GAO has reported in the past that agencies view up-front funding as an impediment to capital acquisition because of the resulting spike in BA.

CAFs have been suggested as a capital asset financing approach that would benefit federal departments and their subunits by addressing both of these challenges. CAFs would be department-level funds that use annually appropriated authority to borrow from the Treasury to purchase federally owned assets needed by subunits of the department. These subunits would then pay the CAF a mortgage payment sufficient to cover the principal and interest payment on the Treasury loan. The CAF would use those receipts only to repay Treasury and not to finance new assets.

The CAF concept was formally proposed in the February 1999 Report of the President's Commission to Study Capital Budgeting as a mechanism that would help improve the process by which annual budget decisions are made by promoting better planning and budgeting of capital expenditures for federally owned facilities. The report states that by ensuring that individual programs are charged the true cost of using capital assets, the CAF encourages managers to make more efficient use of those assets. The Commission report also argues that CAFs could help smooth out the spikes in BA experienced by subunits with capital project requests. By aggregating all up-front BA for capital requests at the department level, subunit budgets would reflect only an annual payment for capital. Since the Commission report, CBO, GAO, and the National Research Council (NRC) have all agreed that CAFs should be explored as a capital financing mechanism. CAFs were also discussed in the President's Fiscal Year 2004 Budget issued in February 2003.
The section on "Budget and Performance Integration" briefly described the concept and reported that draft legislation creating CAFs had been developed, discussed with agencies, and improved. It said that CAFs would be one way to show the uniform annual cost for the use of capital without changing the requirement for up-front appropriations. At this time, OMB's interest in CAFs appears to have waned. CAFs were not mentioned in the President's Budget in either fiscal year 2005 or 2006, and the CAF legislation described has not been introduced.

To address our objectives, we reviewed the available literature describing the CAF concept. We also interviewed budget experts at OMB and CBO to gain a more thorough understanding of how CAFs would operate and to discuss issues involved with their implementation. This permitted us to describe a theoretical CAF with some operational detail. Additionally, we sought the views of the many parties that would be affected if CAFs were established. Since agency and congressional officials were generally unaware of the CAF concept, we developed a brief summary describing the general mechanics of a CAF and shared that summary prior to interviews in order to generate discussion.

To get the department perspective, we chose USDA and DOI as case studies. Both of these departments have substantial and varied capital needs. Capital assets acquired by USDA and DOI include land, buildings, research equipment, laboratories, quarantine facilities, dams, bridges, parklands, roads, trails, vehicles, aircraft, and information technology (hardware and software). In addition, each department has multiple subunits that use capital assets to achieve their missions—important for examining the question of subunit spikes. We interviewed officials at the department and subunit levels to gather their opinions and insights on the operation, benefits, and difficulties of CAFs. Specifically, within USDA we spoke with officials in the Animal and Plant Health Inspection Service (APHIS), the Agricultural Research Service (ARS), and the Forest Service (FS). Within DOI, we spoke with officials in the National Park Service (NPS) and the Bureau of Land Management (BLM). During these discussions, agency officials also compared CAFs (as described in our summary) with current practices used for planning, budgeting, and acquisition of capital assets.

Since congressional approval would be necessary for the creation and operation of CAFs, we spoke with staff on the House and Senate Budget Committees, the House and Senate Appropriations Subcommittees on the Interior, and the House Appropriations Subcommittee on Agriculture to get their opinions on the proposed CAF mechanism. We also interviewed officials at Treasury, which would be responsible for managing the borrowing authority. In addition, we spoke with officials at GSA to discuss how a CAF might affect the FBF, used by some federal agencies to acquire federal office space, and the FTS, used by some federal agencies to acquire IT. Finally, we reviewed agency documents including asset management plans, accounting system descriptions, capitalization policies, and working capital fund information. We also examined our prior work, financial accounting standards, and various legal and budgetary sources specifically related to federal property management. We recognize that our findings on agency perspectives, which are based on interviews with five subunits within two departments, may not be applicable to all agencies within the federal government.
However, we were struck by the consistency in department and subunit reactions to the concept, especially when followed by comparable reactions from congressional officials. Our work was conducted in Washington, D.C., from May 2004 through January 2005 in accordance with generally accepted government auditing standards.

Implementing CAFs would change the current process for financing new federal capital projects. In addition, if all existing capital assets of a department and its subunits were transferred to the CAF, the CAF would impute an annual capital usage charge on those assets to using agencies. This additional complication could be avoided if CAFs were limited to new assets; however, it would then be decades before all programs showed the full annual cost of capital in their budgets. Although in many respects CAFs are accounting devices to record financial transactions, establishing them would create new management and oversight responsibilities for many federal entities. Treasury would have primary responsibility for administering the borrowing authority. Both Treasury and those departments with CAFs would be required to keep track of the many CAF transactions. The management and oversight responsibilities of the departments would need to be clearly spelled out in order for CAFs to operate effectively. OMB would likely have to issue guidelines on operational specifics, and OMB and congressional appropriations committee staff would have to review the CAFs to ensure they were operating properly. OMB and CBO would score (estimate) the CAFs' and subunits' BA—both the initial authority to borrow and the subsequent appropriations used for repayment. A method for scoring the annual capital usage charges, if CAFs were applied to existing capital, has not yet been developed.

Although CAFs do not currently exist, we can describe how they would likely operate based on written proposals and our discussions with budget experts. CAFs would be established at the department level as separate accounts that would receive up-front authority to borrow (provided in appropriation acts), on a project-by-project basis, for the construction and acquisition of large capital projects for all of the subunits within a department. For those departments with subunits split between two appropriation subcommittees, it is likely that two CAFs would be necessary. For example, DOI receives appropriations through two subcommittees: the Energy and Water Development Subcommittee, which is responsible for Bureau of Reclamation (Reclamation) programs, and the Interior and Related Agencies Subcommittee, which is responsible for all other Interior programs. CBO, OMB, and agency officials we spoke with generally believed that having a CAF that crossed subcommittee jurisdictions would create many problems; thus, it would likely be necessary for departments to have a separate CAF for each subcommittee with which they work. Using the example above, DOI would have one CAF for Reclamation and a second for the remaining subunits within DOI. Alternatively, CAFs could be situated at the appropriation subcommittee level rather than the department level, with each of the 13 subcommittees appropriating to its respective CAF for the agencies under its jurisdiction. Some congressional officials did not think that this would be the most effective arrangement and raised the point that increased resources might be needed at the subcommittee level to manage CAF transactions.
In addition, OMB argued that CAFs should be located at the department level because the department is the focus of accountability for planning and managing programs and capital assets, as well as for budget execution and financial reporting.

The CAF would receive appropriations for the full cost of an asset (or useful segment of an asset) in the form of borrowing authority. Like all BA, the borrowing authority for each CAF-financed project would specify the purpose, amount, and duration of the authority. Unless the asset is to be available for use in the same fiscal year, the subunit itself would receive no appropriations. The CAF would use its authority to borrow from the Treasury’s general fund to acquire the asset for the subunit. When the asset became usable, the subunit would begin to pay the CAF an amount equal to a mortgage payment consisting of interest and principal. These equal annual payments would amortize the principal over the useful life of the asset and include interest charges at a rate determined by Treasury (based on the average interest rate on marketable Treasury securities of comparable maturity). The CAF would use these mortgage payments to repay Treasury for the funds borrowed plus interest. Unlike a revolving fund, the mortgage payments collected by the CAF would be used only to repay Treasury and could not be used to finance new assets. For each project funded through the CAF, the subunit’s annual budget request would need to include the annual mortgage payment in each year, for the useful life of the asset (or until the asset was sold or transferred). The subunit would need annual appropriations for these payments, along with its other operating expenses. On the basis of our discussions, we conclude that the appropriations from which the payments would be made would be discretionary as opposed to mandatory. They would not be provided as a line item for mortgage payments to the CAF, but would be part of the subunit’s total appropriation. While the subunit would be required to make the annual payment, there would be no guarantee that Congress would include the additional amounts to cover the payment in the subunit’s appropriation. At some point, the mortgage on an asset would be “paid off.” However, if annual capital usage charges on existing capital were established, payments would continue, although the amount of the payments would depend on the method used to calculate the charges for existing capital. Any imputed charges collected by the CAF would be transferred to the general fund of Treasury and not be available to finance new assets. Later in this report we discuss in more detail the idea of imputing a capital usage charge on existing capital.

Treasury is responsible for administering and managing borrowing authority. Treasury officials explained that within the department, the Financial Management Service (FMS) would have responsibility for setting up the accounts to correspond with each CAF created. Before a CAF could actually borrow from Treasury, an agreement would have to be signed establishing the interest rate and repayment schedule. Treasury officials recommended that OMB establish guidelines to specify the useful life of capital assets so departments would abide by an appropriate amortization schedule and not attempt to lower payments by lengthening an asset’s useful life.
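The mortgage arithmetic underlying these payments is a standard level-payment amortization. The following minimal sketch illustrates it; the asset cost, rate, and useful life are hypothetical, and an actual rate would be set by Treasury as described above.

```python
# Sketch of the level annual "mortgage payment" a subunit would owe a CAF.
# All figures are hypothetical; this is not an official formula from the
# CAF proposals.

def annual_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment that amortizes principal over the asset's
    useful life at the given interest rate (standard annuity formula)."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

COST, RATE, LIFE = 50_000_000, 0.045, 25  # $50M lab, 4.5%, 25-year life
payment = annual_payment(COST, RATE, LIFE)
print(f"Annual mortgage payment: ${payment:,.0f}")  # about $3.4 million

# Each payment splits into interest on the outstanding balance and
# principal repayment; the CAF passes both through to Treasury.
balance = COST
for year in range(1, 4):  # first 3 years shown
    interest = balance * RATE
    principal_part = payment - interest
    balance -= principal_part
    print(f"Year {year}: interest ${interest:,.0f}, "
          f"principal ${principal_part:,.0f}, balance ${balance:,.0f}")
```

Note that lengthening the assumed useful life lowers the annual payment, which is precisely the incentive problem in useful-life guidelines that Treasury officials flagged.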
The standards issued by the Federal Accounting Standards Advisory Board (FASAB) on how to account for property, equipment, and internal-use software could be useful in developing these guidelines. According to Treasury officials, FMS would also be responsible for preparing warrants (official documents that establish the amount of monies authorized to be withdrawn from the central accounts maintained by Treasury) and would report annually on account activity. The Bureau of the Public Debt would have the most day-to-day interaction with the CAF. It would be responsible for transferring the borrowed funds to the department and for receiving payments. Although Treasury officials did not think it would be an unmanageable task, they said that tracking individual transactions could become complicated, depending on the level of detailed reporting required, and would certainly require additional staff time. To cover these costs, they would want to charge an administrative fee, as they do for trust funds.

A CAF would also add a layer of administration that could complicate program management rather than streamline it. At the department level, the chief financial officer would likely be responsible for the financial operation of the CAF. Department heads would need to specify duties for those with capital asset management and oversight responsibilities according to the unique needs of the department. Oversight functions would include accounting for all the transactions between the CAF and Treasury as well as between the CAF and the subunits. In addition, the managerial relationship between the CAF and individual subunits would have to be worked out. OMB would also likely have new responsibilities. For example, OMB would probably have to develop guidelines on issues such as (1) the types of assets to include in the CAF, (2) the amortization schedule for various types of assets, (3) the method for calculating a capital usage charge on existing capital (along with CBO and Congress), and (4) the relationship between a CAF and FBF. Indeed, the NRC report argued that oversight and management of CAFs should actually reside at OMB. Although OMB officials provided no details, they agreed that they would have some responsibility for reviewing CAFs, as would congressional committees.

As they do for all appropriation actions, CBO and OMB would score the CAF and subunit BA—both the initial authority to borrow and the subsequent appropriations used for repayment. Although the net amounts of BA and outlays for capital acquisitions would not change, the type of BA would. Currently, annual appropriations, which allow program managers to incur obligations and make outlays with no additional steps, are provided for most capital acquisitions. A CAF, however, would be appropriated up-front borrowing authority. On a gross basis, the BA would have to be appropriated twice: once as up-front borrowing authority and again, incrementally over time, through appropriations for the annual mortgage payments. Since the annual mortgage payment is purely intragovernmental, the subunit’s BA and outlays are offset by receipts in the CAF, so the total BA and outlays are not double-counted. Therefore, appropriation subcommittee allocations would not need to be adjusted if a CAF were used for new assets. The initial borrowing authority would be equal to the asset cost and would be scored up front in the CAF budget.
When the annual mortgage payments begin, the amount provided in the subunit’s budget would equal the mortgage payment and would be scored as discretionary BA. The mortgage payment would then be transferred to the CAF and, as a receipt, be considered mandatory BA. However, according to OMB, it would be treated as a discretionary offset for scoring purposes. The payment and receipt would completely offset each other within the appropriation subcommittees’ totals and in the BA and outlay totals for the federal budget as a whole. When the CAF repays Treasury using the mortgage receipts, scoring would follow the current guidelines for debt repayment transactions. The mortgage receipt would be considered mandatory BA and be used to repay Treasury; however, the portion of the mortgage payment that corresponds to the amortization of the asset cost would be deducted from the BA (and outlay) totals. When collections are used for debt repayment, they are unavailable for new obligations and therefore are not BA; if they were counted, the BA and outlay totals would be overstated over the life of the loan. According to OMB, the remaining mandatory BA would be obligated and outlaid for interest payments to an intragovernmental receipt account in Treasury, but would not be scored. At this time, the scoring of annual capital usage charges on existing capital assets has not been determined.

CAFs have been proposed as a way to address two challenges that arise from the full up-front funding requirement for capital projects. The first challenge is to facilitate program performance evaluation and promote more effective capital planning and budgeting by allocating capital costs on an annual basis to the programs using the capital. By having annual cost information, managers can better plan and budget for future asset maintenance and replacement. During our interviews, we learned that asset management and cost accounting systems currently being implemented could be used to address this challenge. These systems are designed to provide the information necessary for improved priority setting and better decision making, although many agencies are still working to implement and use them fully. The second challenge—managing periodic spikes in BA caused by capital asset needs—if considered a problem at all, is managed by our case study agencies through existing entities and practices, such as the use of WCFs. Consequently, CAFs appear to offer few benefits over and above those provided by other mechanisms in use or being put into place. In addition, officials at the department and subunit level and key congressional staff we spoke with had a number of concerns about adopting CAFs as an alternative financing method. Most of those we spoke with said CAFs sounded like a complicated mechanism for achieving benefits that can be obtained in simpler ways, and some worried that implementing CAFs could distract from current efforts to improve capital decision making.

Officials we interviewed reacted to our presentation of the CAF mechanism by describing current agency initiatives and existing mechanisms that they believe can better achieve the ultimate goal of improving budgeting and decision making for capital. We found that some agencies currently use asset management plans to collect, track, and analyze cost information and to assist management in budget decisions and priority setting.
Accounting systems that report full costs are also being developed; these will include the cost of capital assets in total program costs and provide a tool for agency managers to make better decisions and use capital more efficiently. Once fully implemented, these methods will give agencies the ability to assign costs at the program level and link those costs to a desired result. The information provided should lead agencies to consider whether they will continue to need the current quantities and types of fixed assets they own to meet future program needs.

As we have reported in previous work, leading organizations gather and track information that helps them identify the gap between what they have and what they need to fulfill their goals and objectives. Routinely assessing the condition of assets and facilities allows managers and their decision makers to evaluate the capabilities of current assets, plan for future asset replacements, and calculate the cost of deferred maintenance. We found that asset management systems are being developed and implemented at some agencies as a mechanism to aid in the identification of asset holdings and the prioritization of maintenance and improvements. For example, we reported in 2004 that NPS, within DOI, is implementing an asset management process. If it operates as planned, the agency will, for the first time, have a reliable inventory of its assets, a process for reporting on the condition of those assets, and a systemwide methodology for estimating deferred maintenance costs. The system requires each park to enter all of its assets and information on their condition into a centralized database for the entire park system and to conduct annual condition assessments and regular comprehensive assessments. This new process will not be fully implemented until fiscal year 2006 or 2007 and will require years of sustained commitment by NPS and other stakeholders. According to NPS documents, this approach and the information captured in the asset management plan provide Grand Canyon National Park managers with the knowledge and specifics to make informed capital investment decisions and to develop sound business cases for funding requests. The appropriators for NPS that we spoke with agreed that the additional funding they have provided for condition assessments and asset management has improved planning and decision making at NPS. Department officials told us that these types of asset management plans would eventually be completed for all capital-holding subunits within DOI. The completion of this management system is especially important for DOI because much of its mission is the upkeep and improvement of its capital for use by the public.

FS, whose capital includes numerous trails, roads, and recreation facilities, has implemented and continues to enhance its asset management system, referred to as Infrastructure (INFRA). INFRA has been in production since 1998 and served as the agency’s primary inventory reporting and portfolio management tool for all owned real property until May 2004. FS officials said that they have used INFRA to assist management in prioritizing backlogs of maintenance and renovations. According to these officials, INFRA allows for the transfer of FS asset inventory data directly into USDA’s asset inventory system, known as the Corporate Property Automated Information System (CPAIS).
CPAIS, which agency officials said was modeled after INFRA and further enhanced to include leased property and GSA assignments, was implemented in May 2004 and maintains the data elements necessary to track and manage owned property, leased property, GSA assignments, and interagency agreements. The system will provide the department and its subunits with the capability to increase asset utilization and cost management and to analyze and reduce maintenance expenses. The primary users of the system are those subunits with considerable capital needs, according to agency officials. ARS’s capital consists mostly of high-priced laboratories, specialized scientific equipment, and research facilities, and officials are confident that CPAIS will provide the information needed to ensure accountability over the agency’s real property. ARS also has its own facilities division, made up of contractors and engineers equipped with the experience and expertise to manage and oversee its specialized capital projects. APHIS officials said they are conducting facility condition assessments and hope to use the information to better align the agency’s mission with its strategic plan.

The need for asset management systems to aid agency officials in making informed decisions was underscored in our 2003 report designating federal real property as a new high-risk area. The report highlighted that, in general, key decision makers lack reliable and useful data on real property assets. In February 2004, the President issued an Executive Order on Federal Real Property Asset Management. The order requires designated agencies to have a real property officer and to implement an asset management planning process. Its purpose is to promote the efficient and economical use of America’s real property assets and to assure management accountability for implementing federal real property management reforms.

We found that some agencies are implementing cost accounting methods, such as activity-based costing (ABC), to help determine the full cost of a product or service, including the annual cost of capital, and are using that information to make budgeting decisions. For example, BLM has implemented a management framework that integrates ABC and performance information. We have previously reported that BLM’s model fully distributes costs and can readily identify, among other things, (1) the full costs of each of its activities and (2) what it costs to pursue each of its strategic goals. The system provides detailed information that facilitates external reporting and can be used for internal purposes, such as developing budgets and analyzing the unit costs of activities and outputs. Integrating cost and performance information into one system helped BLM become a finalist for the President’s Quality Award in 2002 in the “performance and budget integration” category. The bureau was recognized for implementing a disciplined approach that allows it to align resources, outputs, and organizational goals and can lead to insights for reengineering work processes as necessary. Among the results of its ABC efforts, BLM has reported increased efficiency and success in completing deferred maintenance and infrastructure improvement projects. BLM was at the forefront of this cost management effort, which began in 1997 and has now been adopted departmentwide as part of DOI’s vision of effective program management.
In another report, we described how the National Aeronautics and Space Administration (NASA) is beginning to use accounting information to help make decisions about capital assets. NASA’s “Full Cost” Initiative involves changes to accounting, budgeting, and management to enhance cost-effective mission performance by providing complete cost information for more fully informed decision making and management. The accounting changes allow NASA to show the full cost of related projects and supporting activities while the “full cost” budgeting uses budget restructuring to better align resources with its strategic plan. The accounting and budgeting portions of the initiative support the management decision-making process by providing not only better information, but also incentives to make decisions on the most efficient use of resources. For example, NASA officials credited “full cost” budgeting with helping to identify underutilized facilities, such as service pools—the infrastructure capabilities that support multiple programs and projects. NASA’s service pools include wind tunnels, information technology, and fabrication services. If programs do not cover a service pool’s costs, NASA officials said that it raises questions about whether that capability is needed. NASA officials also explained that when program managers are responsible for paying service pool costs associated with their program, program managers have an incentive to consider their use and whether lower cost alternatives exist. As a result, NASA officials said “full cost” budgeting provides officials and program managers a greater incentive to improve the management of these institutional assets. Although accounting changes alone are not sufficient to improve decision making and management, it is clear from discussions with NASA officials and agency documentation that the move to full costing is a critical piece of the initiative. Some agencies still need to make more progress before their cost accounting can more fully inform their decision making, including decisions on capital planning and budgeting. In a 2003 report looking at the financial management systems of 19 federal departments, we found that although departments are required to produce information on the full cost of programs and projects, some of the information is not detailed enough to allow them to evaluate programs and activities on their full costs and merits. For example, the Department of Defense (DOD) does not have the systems and processes in place to capture the required cost information from the hundreds of millions of transactions it processes each year. Lacking complete and accurate overall life-cycle cost information for weapons systems impairs DOD’s and congressional decision makers’ ability to make fully informed decisions about which weapons, or how many, to buy. DOD has acknowledged that the lack of a cost accounting system is its largest impediment to controlling and managing weapon systems costs. Our report states that departments are experimenting with methods of accumulating and assigning costs to obtain the managerial cost information needed to enhance programs, improve processes, establish fees, develop budgets, prepare financial reports, make competitive sourcing decisions, and report on performance. As departments implement and upgrade their financial management systems, opportunities exist for developing cost management information as an integral part of the systems to provide important information that is timely, reliable, and useful. 
The President’s Commission to Study Capital Budgeting and NRC have suggested that a CAF might help ameliorate the spikes in agency budgets that often result from large periodic capital requests by smoothing capital costs over time and across subunits. Our analysis of recent trends in BA for capital acquisitions clearly shows the presence of spikes at the subunit level. (See figure 3 for an illustration of budget spikes and the potential smoothing effects of a CAF at ARS.) However, these spikes did not appear to be a major concern to the case study subunits we spoke with, nor did they consider them a barrier to meeting capital needs. Given current practices for financing capital assets, it seems that some program managers and Congress have found ways to cope with spikes in the absence of CAFs. As a result, the benefit of smoothing costs with a CAF would be minimal.

Our prior work indicates that some agencies have complained that large spikes in their budgets hinder their ability to acquire the funding needed to complete capital projects and that some agencies have turned to alternative financing mechanisms, such as incremental funding, operating leases, and public-private partnerships, that allow them to obtain assets without full, up-front BA. A few agency officials we spoke with said that because of the up-front funding requirement, they have sometimes opted for operating leases instead of capital leases or constructing buildings. Operating leases are generally more expensive than construction, purchase, or capital leases for long-term needs but do not have to be funded up front. Nevertheless, the agencies we spoke with reported that spikes are often created by the changing priorities of Congress and its willingness to provide up-front funding for favored capital projects. For example, ARS officials reported that appropriators have increased the agency’s budget in a given year to fund a new or expanded facility that the subcommittee considered a priority. Historically, the appropriations subcommittee for ARS (and all USDA agencies except FS) has been active in initiating capital projects and following through with the up-front funding necessary to build or acquire assets. From ARS’s perspective, budget spikes are not problematic because of the perceived ease of obtaining needed funds. DOI also reported that some of its subunits have received “waves” of funding for capital projects largely dependent on the priorities of Congress and the President. Within DOI, BLM officials agreed that budget spikes were mostly a result of congressional add-ons. On the other hand, NPS reported that most of its capital projects are simply not large enough to cause a noticeable budget spike. Staff from the congressional budget committees suggested that deliberations during the appropriations process result in some smoothing at the subcommittee level. The smoothing effects may not be apparent to agencies when they review their individual budgets, but they are evident from a governmentwide perspective: historical analysis shows that federal nondefense capital spending has remained relatively constant over the past 30 years.

When spikes might be a problem, the departments and subunits we spoke with have been able to manage them by dividing projects into useful segments and accumulating funds with no-year authority. USDA and FS reported that they have broken capital projects into useful segments and requested funding accordingly to minimize dramatic fluctuations in capital costs.
For example, USDA is currently renovating its headquarters in Washington, D.C., and is using funds the department receives every other year to finance the overhaul of one discrete section of the building at a time. APHIS and BLM have also broken up large projects by funding the survey and design phases in the first year and requesting funds for construction in subsequent years. In addition, ARS and APHIS have authorities that allow them to accumulate a specified amount or percentage of unobligated funds until the amount is sufficient to cover the full up-front costs of the desired asset. For example, ARS is building an animal health center in Iowa at an estimated cost of $460 million. ARS received $124 million in fiscal year 2004 toward the project and can accumulate that money in its no-year account until the total amount needed to cover the costs is collected. In its efforts to consolidate field offices, APHIS officials told us they were granted authority to convert $2 million in unobligated balances into no-year money each year for 3 years. The $6 million the agency was able to accumulate allowed it to fund the consolidation with up-front funding. The agency hopes to expand this authority to apply to other capital, including helicopters and airplanes.

WCFs, a type of revolving fund, are a mechanism that can be used both to spread the cost of capital acquisition over time and to incorporate capital costs into operating budgets. As we have reported previously, WCFs can be effective for agencies with relatively small, ongoing capital needs because the WCFs, through user charges, spread the cost of capital over time in order to build reserves for acquiring new or replacement assets. WCFs also help ensure that capital costs are allocated to the programs that use the capital by promoting full cost accounting. Since WCFs are designed to be self-financing, the user charges must be sufficient to recoup the full cost of operations and include charges, such as depreciation, to help fund capital replacement. Some agencies we spoke with use WCFs to finance capital assets such as IT initiatives and equipment. For example, USDA’s WCF provided funds to the National Finance Center, one of its activity centers, to purchase and implement a financial system. Department officials explained that after the system became operational, the Finance Center charged the 28 user entities a depreciation expense to recoup the costs of purchasing the system so it could repay the WCF. In another example, FS’s WCF purchases radio equipment, aircraft, IT, and other motor-driven equipment. The equipment is rented out to administrative entities within the agency, such as the National Forests and Research Experiment Stations, and to outside agencies for a charge that recoups the costs of operation, maintenance, and depreciation. The user charge is adjusted to include sufficient funds to replace the equipment. Agency officials would like to expand the WCF beyond equipment and establish a facilities maintenance fund, through which they would apply a standard charge per square foot plus a replacement cost component; the charges would be used for ongoing maintenance and replacement and, they believe, would help influence line officers to reexamine capital needs. BLM’s WCF functions similarly to FS’s: it purchases vehicles, then charges fees to users of the vehicles and uses the revenue to buy replacement vehicles.
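The user-charge arithmetic these WCFs rely on can be sketched simply. The following minimal illustration shows a rental rate that recovers operating cost plus depreciation plus a replacement component; the rates and costs are hypothetical, not FS’s or BLM’s actual figures.

```python
# Sketch of a WCF user charge that recovers operations and maintenance,
# depreciation, and a set-aside toward eventual replacement.
# All figures are hypothetical.

def annual_user_charge(acquisition_cost: float,
                       useful_life_years: int,
                       annual_om_cost: float,
                       replacement_cost: float) -> float:
    """Charge = O&M + straight-line depreciation on the original cost
    + an extra component covering the (higher) expected replacement cost."""
    depreciation = acquisition_cost / useful_life_years
    replacement_component = (replacement_cost - acquisition_cost) / useful_life_years
    return annual_om_cost + depreciation + replacement_component

# An owned vehicle: $40,000 to acquire, an 8-year life, $3,000/year O&M,
# and an expected $48,000 replacement cost.
charge = annual_user_charge(40_000, 8, 3_000, 48_000)
print(f"Annual charge to the using office: ${charge:,.0f}")  # $9,000
```

The replacement component is what builds the reserve that the fund later draws on to buy new equipment.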
In both of these examples, the WCF is designed to accumulate the funds to absorb the up-front costs of the capital while the user entities incur the annual costs of using it. This mechanism operates similarly to a CAF, but with more flexibility in the funding requirements. First, since WCFs are revolving funds, they allow agencies to purchase new capital without a specific congressional appropriation, whereas a CAF would require a new appropriation to purchase new capital. Second, WCFs are not subject to fiscal year limitations (they have no-year authority), while CAFs would have project-by-project borrowing authority specified in appropriation acts. Third, WCFs reflect annual capital costs through a depreciation charge, whereas CAFs would reflect this cost through an annual mortgage payment of principal and interest. Hence, both would reflect the annual cost of capital in the subunits’ budgets.

To obtain federal office space, many agencies lease from and make rental payments to GSA, which deposits those funds into the FBF. Although leasing is recognized as being more expensive in the long run than ownership, some agencies lease because it does not require as much up-front funding as ownership (i.e., to avoid spikes). Although a CAF is conceptualized to reduce the amount of up-front funding needed by subunits when acquiring capital assets (while still requiring up-front funding at the department level), it is not clear that having a CAF would encourage subunits to build rather than lease office space. Two agency officials we spoke with said that they would likely continue leasing, and one commented that, if planning outright ownership, it would be easier to obtain the traditional up-front funding than to worry about the annual mortgage payments required by a CAF. Through their charges, both WCFs and the FBF spread the cost of capital over time and ensure that capital costs are properly allocated to the user programs.

Agency officials, congressional staff, and other key players raised numerous concerns about CAFs. For example, department and subunit officials are concerned that there is no guarantee that the annual mortgage payments to be collected by the CAF would be adequately funded in annual subunit appropriations. In addition, some subunits and appropriators are reluctant to shift more control over capital planning and budgeting to the department level. Congressional staff also raised concerns about the feasibility of the congressional mind shift that would be required to fund capital through a mechanism such as a CAF, especially if a charge on existing capital were included, and questioned the value that a CAF would really add to agency planning and budget decision making that could not be obtained through other means. CBO and GSA were also apprehensive about the usefulness of the CAF concept when its operating details were described in full. Most budget experts and agency officials we spoke with agreed that the complexities involved in operating a CAF would likely outweigh the possible benefits. A few worried that CAFs might even divert attention from the current initiatives under way to improve asset management and full costing. Treasury, which would assume responsibility for collecting debt repayments, was concerned that there would be no guarantee that future appropriations would finance the mortgage payments, nor would there be any mechanism by which Treasury could enforce repayment.
Treasury officials feared that over time other types of spending would take priority over debt repayment. They based their concerns on the record of other programs that have struggled to repay debt or for which debt has been “forgiven” or otherwise excused. For example, the Black Lung Disability Trust Fund, which provides disability benefits and medical services to eligible workers in the coal mining industry, has growing debt and will never become solvent under current conditions. Although the trust fund’s revenues are now sufficient to cover current benefits, they cover neither repayment of the more than $8 billion owed the Treasury nor interest on that debt. Another example is the Bonneville Power Administration (BPA), a federal electric power marketing agency in the Pacific Northwest with authority to borrow from Treasury on a permanent, indefinite basis in amounts not exceeding $4.45 billion at any time. BPA finances its operations with power revenues and the loans from Treasury, and it has authority to reduce its debt using “fish credits.” This crediting mechanism, authorized by Congress in 1980, allows BPA to reduce its payments to Treasury by an amount equal to the mitigation measures it funds on behalf of nonpower purposes, such as fish mitigation efforts in the Columbia and Snake River systems. BPA took this credit for the first time in 1995 and has taken it every year since. The annual credit allowed varies but has ranged between about $25 million and $583 million, including the use in 2001 and 2003 of about $325 million in total unused “fish credits” that had accumulated since 1980.

Some officials at the department and subunit level also raised concerns about the long-run feasibility of fulfilling their mortgage payments over the entire repayment period, given that the payments would be made from their annual appropriations, which they expect to become increasingly constrained. The mortgage payments would be relatively uncontrollable items within an agency’s budget, to the detriment of other, more controllable items, such as personnel costs. Because the mortgage costs would not change unless the asset were sold, managers would have less flexibility in making budgeting decisions within the stagnant or declining annual budgets that occur in times of fiscal restraint. BLM officials said this type of fixed obligation, which could consume an increasing share of the bureau’s budget, could hinder its ability to address emergency needs that arise during the year. For example, they cited a case in which the agency reprogrammed resources to deal with a landslide that occurred on the Oregon coast in late 2003. BLM delayed other projects in order to redirect funds for the removal and stabilization of the landslide and to reopen Galice Creek Road, a major artery for public access, recreation, and commercial activity such as timbering, as well as for BLM and FS administration. BLM officials questioned whether a fixed payment to the CAF would constrain their ability to make adjustments such as this. Many agency officials were skeptical of the idea that they could fulfill annual mortgage payments to a CAF without squeezing program operations, and some said they would rather deal with the up-front funding requirement than worry about annual mortgage payments. The alternative to force-fitting a mortgage payment within agencies’ annual appropriations is to adjust agency budgets with an automatic add-on equal to agencies’ mortgage payments.
While this would relieve budget pressures at the agency level, it would probably not provide incentives for managers to improve capital asset management and decision making. Under the CAF concept, requests for capital projects would come from the department level, and the CAF would own all capital assets. This would shift more control over capital planning and decision making from the subunit to the department level. Some agencies and one appropriation subcommittee staffer said they would not favor this shift. Several agencies believe that they have the expertise and experience to better assess their own capital needs, which are often mission specific. For example, ARS’s capital consists mostly of scientific equipment, laboratories, and research facilities designed for conducting agricultural research in various climates, and, as noted earlier, the agency has its own facilities division of contractors and engineers who are involved in the management and oversight of capital projects. Similarly, APHIS’s facilities are mission specific. BLM’s use of activity-based costing allows it to assign capital costs to the program level and track those costs to desired outputs; consequently, the bureau has an intimate understanding of its capital needs and how capital contributes to carrying out its mission. One agency raised the point that departmental management might force bureaus to share facilities or later decide to use an asset for purposes other than those originally intended. While some of these departmental decisions might be beneficial, some agencies were skeptical of departmental decision making.

The officials we interviewed stated that there are important problems in capital budgeting that CAFs do not address. Before the smoothing effects of a CAF could be realized in the out years, the department would still have to receive full up-front funding to begin new capital projects or acquire new assets. And as noted above, some agency officials stated that the annual mortgage payments might be even more of a dilemma than the up-front funding requirement. Since a CAF assumes up-front funding, some agencies may still seek to use some of the alternative financing mechanisms they already use, such as operating leases or enhanced-use leases, to meet capital needs without first having to secure appropriations sufficient to cover the full cost of the asset. As currently envisioned, CAFs would also probably not address capital planning concerns, such as the need for improved budgeting and management of asset life-cycle costs. According to the NRC report, operation and maintenance costs are typically 60 to 85 percent of the total life-cycle costs of a facility, while design and construction typically account for only 5 to 10 percent of those costs. For example, agencies must properly determine the funds needed to staff new and expanded facilities in order to avoid staffing shortages.

Almost everyone we spoke with agreed that CAFs sounded complicated, and many questioned whether the challenges in budgeting for capital that CAFs were designed to address were great enough to warrant CAFs as a solution. Congressional budget committee and appropriations subcommittee staff agreed that CAFs might be beneficial in theory but were probably not worth the additional budget complexity they would create. Budget committee staff considered the proposed benefits of a CAF to be abstract and uncertain, coupled with a sizable likelihood of repayment problems in the out years.
In addition, they saw no obvious dilemma prompting the need for CAFs. While this capital financing approach may be appealing in theory, since it promotes strategic planning and broadened, forward-looking perspectives, budget practitioners cautioned against adopting an approach involving such layers of complexity in the absence of a clearly stated, agreed-upon problem that the new approach is expected to address. Further, they saw a need for agencies to complete their implementation of capital asset management and cost accounting systems, which can help achieve some of the same benefits that CAFs were meant to achieve. A good asset management system, including inventories and asset condition data, would likely be a necessary precursor to successfully implementing CAFs. All of these factors weaken the case for CAFs as an improved approach to current capital financing practices.

While in theory CAFs could be implemented at most agencies, there are several complex issues that Congress would need to consider before adopting such a mechanism. For example, proposals to apply CAFs to existing capital would require the development of a formula to calculate an annual capital usage charge, which is likely to be a difficult and contentious undertaking. Key players, including OMB, CBO, and Congress, would need to work together to develop an agreed-upon method of estimating an appropriate capital usage charge for various types of assets. And even if the full cost of programs, including the cost of existing capital, were more accurately reflected in the budget through the use of CAFs, incentives to cut capital costs might not materialize except in times of severe budget cuts. Even then, managers’ ability to eliminate unneeded capital assets would probably be limited given mission responsibilities and legal requirements that govern the disposal of surplus federal property. To remedy this, additional funding or agency flexibilities would be needed, as would provisions to ensure debt repayment if CAF-financed assets were transferred or sold. Additionally, it is likely that some capital projects for federal office space, IT, and land would continue to be financed outside the CAF through mechanisms such as the FBF, WCFs, or the GSA IT Fund.

There are arguments for applying the CAF concept to existing capital assets as well as new ones to ensure that the full costs of all programs are reflected in the budget. OMB points out that if CAFs were not applied to all capital, it would be many decades before programs reflected full annual costs and before the cost of alternative inputs could be compared. Developing an annual capital usage charge for existing assets would establish a level playing field for federal capital investment and allow for comparisons across programs. In addition, this new charge could influence agency managers to shed excess capital assets. Accomplishing these goals would require developing a standard method of computing an appropriate annual capital usage charge. Subunits would pay these charges to the department’s CAF using appropriated funds, which would then be transferred to Treasury’s general fund. In other words, agencies would receive appropriations to pay for the use of capital assets they already own and would not retain any of the funds to maintain or replace those assets. Imputing such a charge on existing capital is likely to be difficult and very contentious given questions about how to estimate the charge and the fact that the assets were already funded.
Before imputing an annual capital usage charge, key players, including OMB and Congress, would need to agree on some type of standard formula to estimate the charge. Three possible approaches to computing annual capital usage charges would be to (1) use the historical cost of the asset, applying a charge as though the original cost had been financed by borrowing from Treasury; (2) use market rental rates; or (3) devise a calculation incorporating asset replacement cost, depreciation rates, and interest rates. There are arguments for and against each of these options. For example, while using historical cost would make the charge congruent with accounting data, the charge would not reflect the current cost of using capital and so might be less meaningful for evaluating costs. Although market rates would theoretically be the right measure for comparing the cost of using resources for federal versus private purposes, many government assets fill unique purposes and thus have no market value to measure. For example, some agencies occupy historic buildings, such as the Old Executive Office Building, for which a comparable market-based value would be difficult to determine. The third approach might be considered an agreeable middle ground, but applying depreciation rates poses problems since such rates are largely arbitrary, and agreement on whether to apply Treasury or market interest rates would be necessary.

Some agency officials and congressional staff suggested that any charges on existing capital should reflect the life-cycle costs of maintaining assets and that, similar to a WCF, the receipts collected should be made available for future maintenance and renovation costs. We have reported that repair and maintenance backlogs in federal facilities are significant and that the challenges of addressing facility deterioration are prevalent at major real property-holding agencies. However, research and discussions on CAF design indicate that CAF receipts could go only to Treasury, not toward future projects. Officials were also skeptical about how to charge accurately for highly specialized capital. For example, ARS has more than 100 laboratories located in various regions of the country, as well as abroad, designed to carry out mission responsibilities ranging from the study of crop production to human nutrition to animal disease control. The highly technical and diverse nature of its objectives requires capital assets suitable for varied climates, soils, and other agricultural factors, which poses unique and difficult challenges in establishing capital usage charges that agency officials would view as acceptable.

If key players were able to agree on the method for calculating usage charges on existing capital assets, they would also have to examine the budgetary effects of such charges. Budget scorekeepers—OMB, CBO, and the budget committees—would need to develop additional scoring rules to clarify how the usage charges would be treated in the budget. Unlike charges on new capital, usage charges on existing capital have no corresponding debt to repay, so scorekeepers would have to specify how to score the charges as they are transferred from the CAF to Treasury. Although these charges would not change agency or government outlays or the deficit, they could require a permanent increase in agencies’ total BA, which would require Congress to consider adjusting appropriations subcommittee allocations.
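To make the three candidate formulas concrete, the sketch below computes a usage charge for a single hypothetical asset under each approach; the asset figures, rates, and rents are illustrative assumptions, not values drawn from any proposal.

```python
# Sketch comparing three ways to impute an annual usage charge on an
# existing asset. All inputs are hypothetical.

def charge_historical(hist_cost, treasury_rate, remaining_life):
    """(1) Treat historical cost as if financed by Treasury borrowing:
    a level payment amortizing the original cost."""
    r, n = treasury_rate, remaining_life
    return hist_cost * r / (1 - (1 + r) ** -n)

def charge_market_rent(annual_market_rent):
    """(2) Pass through an appraised market rental rate."""
    return annual_market_rent

def charge_replacement(replacement_cost, depreciation_rate, interest_rate):
    """(3) Replacement cost times (depreciation + interest), i.e.,
    depreciation on current value plus an opportunity cost of capital."""
    return replacement_cost * (depreciation_rate + interest_rate)

# A building: $20M historical cost, $35M to replace, 20 years of life left.
print(f"(1) historical:  ${charge_historical(20e6, 0.045, 20):,.0f}")
print(f"(2) market rent: ${charge_market_rent(2.1e6):,.0f}")
print(f"(3) replacement: ${charge_replacement(35e6, 0.03, 0.045):,.0f}")
# The three formulas can yield widely different charges for the same
# asset, one reason agreement on a standard method would be contentious.
```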
Oversight of the transfer of usage charge collections from the CAF to Treasury would be especially important, since those collections would exceed what is needed to repay Treasury loans, creating a temptation to use the accruing balances for other purposes. Similar questions about how to calculate and score capital usage charges for existing assets would eventually pertain to new capital funded through the CAF: once an asset is fully “paid off” through the CAF, it is comparable to existing capital and would similarly incur an annual capital usage charge. Some might argue that payments should continue in the same amounts as before, while others might call for calculating a new capital usage charge for “paid off” assets based on the formula used for capital that existed before the creation of CAFs. In any case, numerous decisions on capital usage charges for existing capital would need to be made before implementing CAFs.

Aside from the specifics of how to develop appropriate capital usage charges, most agency officials and congressional staff with whom we spoke were skeptical of the need for such a charge. Many said that the costs of maintaining capital assets, which are reflected in agency budgets, and depreciation expenses, which are reflected in agency accounting systems, sufficiently represent the cost of existing capital assets and help inform managers. As discussed earlier, asset management systems and full cost accounting approaches are also beginning to provide the information managers need to make better decisions about the maintenance or disposal of existing assets and the need for new capital. Some congressional staff thought the mind shift required for Congress to agree to impute this new charge on existing capital assets would be even more difficult than that required for purchasing new capital using borrowing authority.

In New Zealand, Australia, and the United Kingdom, charges on existing capital are being used to encourage the efficient use of assets. These charges, similar to interest charges, are generally intended to reflect the opportunity cost of capital invested. In New Zealand, departments are appropriated a capital charge based on their asset base at the beginning of the year; at the end of the year they must pay the government a capital charge based on their year-end asset base. If a department has a smaller asset base at the end of the year than the asset base for which the appropriation was made, the department is permitted to keep part of the appropriation made for the capital charge. This spurred the New Zealand Department of Education to sell a number of vacant sites that it had acquired in the 1960s but no longer needed. However, officials in New Zealand’s Office of the Controller and Auditor-General were uncertain about the effectiveness of a capital charge in changing behavior significantly. In addition, some analysts in New Zealand expressed concern that capital charging could drive department executives to decisions that are rational in the short term but damaging in the long term. For example, an audit official suggested that a department might have an incentive to try to operate with obsolete and fully depreciated assets in order to avoid a higher capital charge.
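The incentive in the New Zealand arrangement can be shown with a small sketch; the charge rate and asset values are hypothetical, and the mechanics are simplified to the description above.

```python
# Sketch of a New Zealand-style capital charge. A department is
# appropriated the charge on its opening asset base but pays the charge
# on its closing asset base, so shrinking the base lets it keep the
# difference. Rate and asset values are hypothetical.

CHARGE_RATE = 0.08

opening_assets = 500_000_000   # asset base when the appropriation is made
closing_assets = 470_000_000   # base after selling unneeded sites

appropriated = CHARGE_RATE * opening_assets  # $40.0M received
owed = CHARGE_RATE * closing_assets          # $37.6M paid back
retained = appropriated - owed               # $2.4M the department keeps

print(f"Appropriated: ${appropriated:,.0f}")
print(f"Charge owed:  ${owed:,.0f}")
print(f"Retained:     ${retained:,.0f}")
```

This is the mechanism that encouraged the Department of Education asset sales noted above; it also illustrates the flip side the auditors worried about, since holding obsolete, fully depreciated assets minimizes the charge as well.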
Although one goal of CAFs is to ensure the allocation of full costs to programs in the budget and thereby encourage managers to make more informed decisions about capital assets, additional incentives to evaluate new or existing asset needs are unlikely to be created except during times of severe budget cuts or downsizing. For new assets funded through the CAF, the mortgage payments made out of the subunits’ appropriations would be equal to those received by the CAF, so the payments would offset each other within the department budget and at the appropriations subcommittee level and would not affect the deficit. Although information on total program costs might be made more transparent, it is not clear that this would create stronger incentives for more careful deliberation on future asset needs than showing these costs through available methods such as cost accounting systems or working capital funds. A charge on existing assets might also have limited impact. If appropriation subcommittee allocations were simply raised to accommodate new capital usage charges, programs would appear more expensive but perhaps not differentially so. As with new assets, the capital charge on existing assets would not affect the deficit. As a result, incentives for rationalizing existing capital would not necessarily be created. Even during tight budget years, when mandatory CAF payments would squeeze operating budgets and be most likely to force trade-offs among capital assets, managers might be constrained by mission responsibilities, legal requirements, or the cost of disposing of assets. Consequently, agencies might have to argue for increased funding or case-by-case exemptions, which Congress has granted in the past.

Some agencies questioned the effectiveness of applying a charge to influence managers’ decision making given the unique locations or types of assets required to accomplish mission goals. BLM officials said an annual capital usage charge would have a limited impact on their ability to dispose of capital assets because of the bureau’s stewardship role over the nation’s public lands. Similarly, ARS officials justified having locations dispersed all over the country because its research activities are diverse and require facilities in various climates and environments. As discussed, Congress also plays a role in determining where ARS will conduct its research. Likewise, many of APHIS’s capital assets are mission specific, including animal quarantine stations, sterile insect-rearing facilities, and laboratories, and typically have no comparable counterpart in the commercial sector. APHIS officials said this limits managers’ ability to sell or transfer assets because the land often must be restored to its original condition, a costly undertaking. For some subunits we spoke with, destruction of certain assets, which also has an up-front cost, is the only viable option for eliminating unneeded assets. For example, NPS and FS have many facilities located on public land; if no longer needed, some of these facilities cannot be sold or transferred and would have to be demolished. According to FS officials, when they determine that an asset has exhausted its useful life and needs to be disposed of, the agency incurs the cost of removal and recovers the salvage value. Many agencies are also subject to legal requirements that create disincentives for disposing of surplus property.
In these cases, agencies would need additional funding or more flexibility to modify asset holdings if improved decision making were to be realized. For example, under the National Environmental Policy Act, agencies may need to assess the environmental impact of their decisions to dispose of property. In general, agencies are responsible for the environmental cleanup of properties contaminated with hazardous substances prior to disposal, which can involve years of study and considerable cost. Agencies that own properties with historic designations—which is common in the federal portfolio and certainly within the inventories of USDA and DOI—are required under the National Historic Preservation Act to ensure that historic preservation is factored into how the property is eventually used. The Stewart B. McKinney Homeless Assistance Act, as amended, requires that consideration be given to making surplus federal property, including buildings and land, available for use by states, local governments, and nonprofit agencies to assist homeless people. If none of these restrictions apply and an agency is able to sell an asset, most agencies cannot retain the proceeds from the sale of unneeded property, even up to the cost of disposal. However, Congress has granted special authorities in some cases. For example, FS officials told us the agency owned a number of trails and roads on public lands that ran through the city of Los Angeles, California. When the city expanded, it was no longer feasible to maintain the roads and trails; as a result, the agency was granted authority to sell the land and use the proceeds to build a new ranger station. We have said that agencies should be allowed to retain enough of the proceeds from an asset sale to recoup the cost of disposal and that in some cases it may make sense to permit agencies to retain additional proceeds for reinvestment in real property where a need exists.

Provisions would also need to be established to ensure the full repayment of CAF debts in the event that an agency sells or transfers a capital asset before it reaches the end of its useful life (the repayment period). Two possible options would be to (1) transfer the outstanding debt to a new “owner” agency of the asset or (2) allow the “seller” agency to sell the asset and use the proceeds from the sale to repay the outstanding CAF debt. Both of these options would produce complications and issues to resolve. For example, transferring the asset would require all parties involved, including Treasury, to record adjustments to their CAF accounting systems and would oblige subunits to adjust their budget requests accordingly. After the transfer, it is not clear whether the “seller” agency’s budget would be reduced by an amount equal to the asset’s mortgage payment; if it were, that would lessen or eliminate the incentive for the “seller” agency to sell or transfer the asset. If the asset were sold instead of transferred, an appropriate “sale price” would need to be determined, as well as the appropriate disposition of the sale proceeds. For example, if the asset were sold for an amount greater than the outstanding CAF debt, the Treasury general fund would receive full repayment on the asset plus excess revenue. On the other hand, if an asset were sold for an amount less than the outstanding debt, the CAF would default on the loan unless additional receipts for debt repayment were appropriated.
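Whether a sale covers the debt turns on the loan’s remaining balance, which falls slowly in the early years of a level-payment mortgage. A minimal sketch, reusing the annuity arithmetic from the earlier example and again using hypothetical figures:

```python
# Sketch: outstanding CAF debt after k payments on a level-payment loan,
# compared with a hypothetical sale price. All figures hypothetical.

def remaining_balance(principal: float, rate: float,
                      years: int, payments_made: int) -> float:
    """Balance after payments_made annual payments on a loan amortized
    over `years` at interest `rate` (standard amortization identity)."""
    pmt = principal * rate / (1 - (1 + rate) ** -years)
    growth = (1 + rate) ** payments_made
    return principal * growth - pmt * (growth - 1) / rate

COST, RATE, LIFE = 50_000_000, 0.045, 25
balance = remaining_balance(COST, RATE, LIFE, payments_made=10)
print(f"Debt after 10 of 25 payments: ${balance:,.0f}")  # about $36 million

sale_price = 30_000_000
shortfall = balance - sale_price
print(f"Shortfall if sold for $30M:   ${shortfall:,.0f}")
# A shortfall would leave the CAF unable to repay Treasury in full
# unless additional receipts were appropriated.
```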
Finally, some subunits might seek to refinance their mortgages if a lower Treasury interest rate became available and would result in lower payments. Again, before CAFs are implemented, proposals on how to handle such circumstances would need to be addressed.

Any CAF proposal would also need to address the fund’s scope of coverage. Capital assets are generally defined as land, structures, equipment, and intellectual property (such as software) that are used by the federal government and have estimated useful lives of 2 years or more. However, departments have some discretion in defining capital. The Commission report suggested that OMB issue guidance on which capital items belong in the CAF to ensure uniform implementation of the CAF proposal. Alternatively, each department could use its current guidelines and definitions to determine which capital to fund through the CAF. Whatever parameters were put in place, some capital assets would likely continue to be funded outside the CAF through existing mechanisms. For example, for federal office space, the Commission and NRC reports state that agencies would generally continue to lease space from GSA and pay rent to the FBF. The FBF, a governmentwide revolving fund, is used to acquire office buildings, and the space is then rented out to federal agencies. Most agencies are not allowed to lease their own office space unless GSA delegates its authority to do so to that agency, which GSA has done in the past. Under the CAF mechanism, if GSA were to delegate this authority, the CAF would lease the office space. The NRC report recommends that agencies use their CAFs for office space acquisition only if it could be done more effectively and efficiently than through GSA. GSA would negotiate the acquisition of space for multiple agencies that seek to collocate in a single facility. Agencies also have the option to purchase IT through FTS and its IT Fund. For a fee, FTS provides expertise and assistance in acquiring and managing IT products. Agencies that choose to use this service may argue for continuing to finance these projects outside the CAF so that they are not paying a fee to FTS as well as interest on the borrowed funds. Some officials also questioned the effectiveness of using borrowing authority to finance IT purchases when such assets’ useful lives are typically no more than 10 years and often 5 years or less, indicating that officials may argue to fund some IT projects outside the CAF. Departments and subunits would also likely continue to rent certain capital assets from WCFs or to use their WCFs to purchase some capital. As discussed, WCFs rely on user charges to fund the ongoing maintenance and replacement of capital assets, and some departments and subunits use the collections to finance capital assets such as vehicles and IT. Land, such as wilderness areas, is also likely to remain outside the CAF: land retains its value, so concepts such as depreciation and amortization do not apply to it. However, one subunit official stated that using borrowing authority to buy land might be beneficial if it meant that land could be purchased at a faster rate to obtain environmentally sensitive land before it is damaged.

There is little doubt that in the mechanical sense CAFs could work as a new system for financing capital assets. However, the implementation and operation of the CAF concept would be complicated.
Managing the extra layer of responsibilities for CAF administration and oversight would require the devotion of resources within departments, subunits, and Treasury and, to a lesser extent, within OMB, CBO, and Congress. Accounting for CAF transactions would be complex and burdensome. The annual debt repayment would be a source of concern for Treasury and agency officials, especially as more assets were financed through the CAF and mortgage payments became a larger percentage of agency appropriations. Beyond the complexities inherent in financing capital assets using borrowing authority is a list of difficult issues that would have to be resolved before benefits could be realized. The most difficult of these issues, applying a capital usage charge to existing capital, would also be the most important to address if annual capital costs were to be allocated to program budgets. If CAFs were applied only to new assets going forward, programs would not reflect the full annual cost of capital for decades, and programs purchasing new capital would appear more expensive than those using existing capital. Even if this and other issues were tackled and improved information about capital costs was provided to managers, there is little assurance that CAFs alone would create incentives for programs to reassess their use of capital. Even in times of severe budget constraints, it is probable that managerial flexibility to adjust the amount of assets used by a program would continue to be limited by agency missions, legal restrictions, and limited funds for asset disposal. Given the execution complexities and implementation concerns, the question becomes whether simpler methods could achieve the same benefits as CAFs. We believe there is strong evidence that both benefits attributed to CAFs could be more easily obtained through existing mechanisms. Asset management and cost accounting systems, when fully implemented, will be important tools for promoting more effective planning and budgeting for capital. Cost accounting systems can provide the same information on capital costs as CAFs are intended to provide, while the information provided by asset management systems could be even more crucial for helping managers with limited budgets prioritize capital asset maintenance and replacement. For existing capital, incentives to rationalize assets might be created if agencies were allowed to retain proceeds to recoup the cost of disposal or, in some cases, for reinvestment in real property. While some of our case study agencies did not view spikes as a problem, those that did felt they were managing them well through the use of WCFs, no-year authority, and the acquisition of assets in useful segments. In any case, spikes in spending for capital assets are likely to continue as congressional and presidential priorities change over time. When we described the CAF proposal in detail to executive branch and congressional officials, we learned that it would likely have few proponents. Almost everyone we consulted concluded that implementation issues would overwhelm the potential benefits of a CAF. More importantly, current efforts under way in agencies would achieve the same goals as a CAF without introducing the difficulties. Given this, as long as alternative efforts uphold the principle of up-front funding, a CAF mechanism does not seem to be worth the complexity and implementation challenges that it would create.
We obtained comments on a draft of this report from OMB, Treasury, GSA, and our case study agencies—USDA and DOI. Treasury, GSA, USDA, and DOI generally agreed with the report. Treasury, USDA, DOI, and OMB provided technical comments, which have been incorporated as appropriate. OMB agreed with our description of the mechanics of CAFs and concurred that spikes in BA for capital assets could be alleviated through other means. OMB also acknowledged the problems with CAFs that are highlighted in this report, including those related to existing capital, and agreed that the complications of designing and operating CAFs might outweigh the benefits. However, OMB disagreed with our description of the primary goal of CAFs and therefore does not believe that alternative mechanisms achieve the same goal. OMB supports having program budgets reflect full annual budgetary costs in order to change incentives for decision makers. In addition to proposing to budget for accruing retirement benefit costs, OMB has suggested budgeting for accruing hazardous waste clean-up costs and budgeting for capital through CAFs. Budgeting for full annual budgetary costs should facilitate decision makers’ ability to compare total resources used with results achieved across government programs. For capital, OMB has suggested CAFs as a possible method to allocate and embed the cost of capital assets at the program budget level. OMB recognizes the usefulness of asset management and cost accounting systems regardless of whether CAFs are adopted. It is OMB’s opinion that these tools do not ensure that the costs of capital are captured in individual program budgets and therefore do not affect incentives for decision makers in allocating resources among and within programs. We disagree on several points. We recognize that if the sole or primary purpose of a CAF is to embed costs in program budgets, then the alternatives discussed in this report do not achieve that purpose. However, we believe, as highlighted in the Report of the President’s Commission to Study Capital Budgeting, that the primary goal of CAFs is to improve decision making for capital. We are not convinced that CAFs and the annual mortgage payments they would require would achieve this more effectively than other mechanisms. We argue instead that the information provided by asset management and cost accounting systems, when fully implemented, could assist decision makers in efficiently allocating budgetary resources. While this information may not necessarily be reflected in program budgets, it is available to aid in budget and program decision making. The fact that many of these systems are in relatively early stages of development also increases our concern about CAFs. In a recent report, we noted the belief among some agency officials, congressional appropriations committee staff, and budget experts that improving underlying financial and performance information should be a prerequisite to efforts to restructure program budgets. We argue this would also be true for CAFs, since without adequate measures of program costs and an ability to identify capital priorities, a new financing mechanism would do nothing to address the basic challenges of determining how much and what types of capital are needed. It is also unclear that CAFs would create new incentives, as OMB argues.
As we describe in the section titled “Cost Allocation Efforts May Have Limited Effect on Agency Decision Making,” if the annual mortgage payments offset each other within the department budget and at the appropriations subcommittee level, the deficit would not be affected, and it is unlikely that incentives would change. Even during tight budget years, when CAF payments would squeeze operating budgets, managers may be unable to change the amount of capital assets they use because of mission responsibilities, legal requirements, or the cost of disposing of assets. We also recognize the value of linking resources to results in comparing programs; however, it is unclear that CAFs are necessary or would even work to accomplish this. Institutionalizing CAFs could permit program comparison, but fair evaluations would be possible only if existing capital were included. Therefore, the difficult issue of including existing capital would have to be addressed. Alternatively, we believe that cost accounting systems, when well developed within and across agencies, provide a similar opportunity for comparing programs. In conclusion, we remain of the view that the operational challenges of CAFs outweigh the benefits and that the alternative mechanisms described in this report can more simply accomplish the goals of CAFs. We believe that the discussion of BPA’s use of “fish credits” is an appropriate example for the section on agencies’ repayment of their borrowing from Treasury. Although these credits were provided by Congress, their use for offsetting payments on Treasury debt has been controversial and opposed by some members of Congress and other interested parties. However, we have made technical changes to the section based on Treasury’s comments. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its issuance date. At that time we will send copies of this report to the Director of the Office of Management and Budget, the Administrator of the General Services Administration, the Secretary of the Department of the Interior, the Secretary of the Department of Agriculture, and the Secretary of the Department of the Treasury. We will also make copies available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding the information in this report, please contact me at (202) 512-9142 or Christine Bonham at (202) 512-9576. Key contributors to this report were Jennifer A. Ashford, Leah Q. Nash, and Seema V. Dargar.
CAFs have been discussed as a new mechanism for financing federal capital assets. As envisioned, CAFs would have two goals. First, CAFs would potentially improve decision making by reflecting the annual cost for the use of capital in program budgets. Second, they would help ameliorate at the subunit level the effect of large increases in budget authority for capital projects (i.e., spikes), without forfeiting congressional controls requiring the full cost of capital assets to be provided up front. Through discussions with budget experts and by working with two case studies, the Departments of Agriculture and of the Interior, we describe in this report (1) how CAFs would likely operate, (2) the potential benefits and difficulties of CAFs, including alternative mechanisms for obtaining the benefits, and (3) several issues to weigh when considering implementation of CAFs. Capital acquisition funds (CAFs) have been suggested as department-level funds that would use appropriated up-front borrowing authority to buy new departmental subunit assets. These subunits would then pay the CAF a mortgage payment sufficient to cover the principal and interest payment on the Treasury loan. The CAF would use those receipts only to repay Treasury and not to finance new assets. If existing capital assets were transferred to the CAF, subunits would pay an annual capital usage charge to the CAF. CAFs might achieve the goals intended, but these goals can be achieved through simpler means. Alternative mechanisms, such as asset management systems, cost accounting systems, and working capital funds, may achieve the goal of allocating annual capital costs and improving decision making for capital assets. Our case study agencies generally did not indicate problems with budget authority spikes. They budget in useful segments, use accumulated no-year authority, or finance capital assets using working capital funds. Many concerns about CAFs were raised, including the long-term feasibility of making fixed annual mortgage payments and the added complexity CAFs would create. Implementation would raise a number of issues. If CAFs were applied only to new assets going forward, programs would not reflect the full annual cost of capital for decades. Yet the difficulties of including existing capital are numerous. Even if these issues were tackled, there is little assurance that CAFs alone would create new incentives for programs to reassess their use of capital, since CAF payments would not affect the deficit. Implementation issues could overwhelm the potential benefits of a CAF. More importantly, current efforts under way in agencies would reflect asset costs as part of program costs without introducing the difficulties of a CAF. As long as alternative efforts uphold the principle of up-front funding, CAFs do not seem to be worth the implementation challenges they would create. Except for OMB, agencies generally agreed with our conclusions.
In 1968, Congress created NFIP to address the increasing cost of federal disaster assistance by providing flood insurance to property owners in flood-prone areas, where such insurance was either not available or prohibitively expensive. The 1968 law also authorized premium subsidies to encourage property owner participation. To participate in the program, communities must adopt and agree to enforce floodplain management regulations to reduce future flood damage. In exchange, federally backed flood insurance is offered to residents in those communities. NFIP was subsequently modified by various amendments to strengthen certain aspects of the program. The Flood Disaster Protection Act of 1973 made the purchase of flood insurance mandatory for properties in special flood hazard areas—areas that are at high risk for flooding—that are security for loans from federally regulated lenders and located in NFIP participating communities. This requirement expanded the overall number of insured properties, including those that qualified for subsidized premiums. The National Flood Insurance Reform Act of 1994 expanded the purchase requirement for federally backed mortgages on properties located in special flood hazard areas. Congress has passed two key pieces of legislation designed to reform NFIP. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act), enacted in July 2012, instituted provisions to help strengthen NFIP’s future financial solvency and administrative efficiency. For example, it required FEMA to phase out almost all discounted insurance premiums and establish a reserve fund. As implementation proceeded, however, affected communities raised concerns about some Biggert-Waters Act requirements. The Homeowner Flood Insurance Affordability Act of 2014 (HFIAA) was enacted in March 2014 and sought largely to address affordability concerns by repealing or altering some Biggert-Waters Act requirements. In this context, HFIAA included provisions requiring FEMA to produce a study and submit a report that assesses and recommends options, methods, and strategies for making voluntary CBFI available through NFIP. According to FEMA, as of March 2016 there were more than 5 million NFIP policies in force, spread across more than 22,000 communities throughout the United States and its territories (see figs. 1 and 2). FEMA defines a community as a political entity that has the authority to adopt and enforce floodplain ordinances for the area under its jurisdiction, which is—in most cases—an incorporated city, town, township, borough, village, or an unincorporated area of a county or parish. While FEMA could potentially modify this definition to include other types of communities for a CBFI option, the number of communities currently participating in NFIP provides one measure of how many communities could hypothetically pursue a CBFI option. In 2007, we identified four broad policy goals for federal involvement in natural catastrophe insurance: (1) charging premium rates that fully reflect actual risks; (2) encouraging private markets to provide natural catastrophe insurance; (3) encouraging broad participation in natural catastrophe insurance programs; and (4) limiting costs to taxpayers before and after a disaster.
We identified these goals by drawing insights from a variety of sources: past GAO work, legislative histories of laws that changed the roles of state governments and the federal government after disasters, bills considered by Congress, interviews with public and private sector officials, and articles written by experts in insurance economics. We believe that the four goals we identified accurately capture the essential concerns of the federal government. We have previously used these policy goals to evaluate potential changes to NFIP. We believe that they are still relevant and could also be used to evaluate a CBFI option. We determined that the CBFI study objectives and methodology agreed upon by FEMA and NAS were reasonable. The objectives were clearly stated and designed to provide relevant, high-level considerations that could help FEMA decide whether CBFI should be implemented before determining how it should be implemented, which FEMA officials said was an important step. The methodology was consistent with the study’s objectives and was reasonable given the limitations identified by FEMA: short time frames, the need to obtain informed opinions, and unavailable data. As a result, the study included findings that helped FEMA draw conclusions about whether and how to implement CBFI, if it chose to do so. FEMA and NAS took several steps to design the objectives and methodology of the CBFI study. FEMA initially provided NAS with documents that included background information on the purpose of the study and proposed a methodology and objectives. Then, NAS officials responded with minor modifications that reflected what NAS could do to meet FEMA’s needs. FEMA and NAS officials told us that they held two meetings and communicated via email before agreeing to the methodology, objectives, and time frames (see fig. 3). According to FEMA and NAS officials, FEMA and NAS agreed that the study’s objectives were to examine future prospects for CBFI by identifying and discussing issues that would require further evaluation in order for FEMA and others to better evaluate CBFI’s strengths and weaknesses. According to FEMA officials, the agreed-upon objectives met their needs. To execute the CBFI study, NAS convened an expert committee to produce a report using a consensus report process. The committee was composed of 12 members who represented academia, the private sector, and state and federal government. The committee met twice: the January 2015 meeting featured guest presentations, and the March 2015 meeting was convened in a workshop-type format with multiple panels of external speakers. Following these meetings, the NAS study director, committee chair, and other committee members developed sections of the study, and all committee members signed off on the full study. Prior to publication, NAS conducted an independent external peer review by experts whom NAS recruited based on their technical expertise and breadth of perspectives. NAS and FEMA officials explained that they decided to use the consensus report process because it offered several benefits and did not face the same limitations that other methodologies would have faced. According to NAS officials, because consensus reports modulate different perspectives and identify new concepts and approaches, this approach was appropriate to meet the study’s objectives.
FEMA officials explained that they requested a consensus report methodology because it would provide independent, creative, and unbiased ideas from experts who already had a deep contextual knowledge of NFIP. Further, FEMA officials said that alternative approaches they considered would have faced various limitations. They stated that community-based surveys or listening sessions, for example, would have taken more time and might not have reached audiences that could provide well-informed opinions. Data analysis was not considered a viable option because FEMA does not have the data it would need to determine hypothetical costs for participating communities, homeowners, and FEMA. For example, as we have previously reported, FEMA does not collect all information necessary to determine flood risk, which is a key factor for determining NFIP premiums for individual policyholders or, potentially, communities. We determined that FEMA’s study provided relevant information for evaluating the advisability of implementing CBFI. The study identified and discussed seven main areas for further consideration in designing a CBFI option for potential implementation:
Risk bearing and sharing. A CBFI option could conceivably shift risk bearing to communities, private insurers, or individuals depending on how it is structured.
Responsibilities for writing policies and loss adjustments. Participating insurance agents write policies and collect premiums under NFIP, but CBFI policies could be written at the community level.
Coverage limits, standards, and compliance. Movement to a CBFI policy option could provide a community with an opportunity to reconsider flood exposures, such as extending the exposure beyond individual homes to public infrastructure.
Underwriting, pricing, and allocation of premium costs. Several complex considerations could fall under this topic, including methods used in setting premiums, the extent to which catastrophic losses would be reflected, and the allocation of premium costs among homeowners.
Administrative capabilities. Communities would likely not have expertise for undertaking the administration of CBFI policies, but depending on how a community is defined, it could have the means for collecting funds to pay for CBFI premiums.
Confirming compliance with mandatory purchase requirements. A community-based policy might have to maintain some aspect of individual property coverage in order to satisfy mandatory flood insurance purchase requirements, which could be administratively burdensome.
Pricing expertise, including valuation of mitigation measures. Pricing of CBFI policies would need to account for the risk underwritten and the savings expected from mitigation measures.
The study also provided other relevant information related to CBFI, such as discussion of potential benefits and challenges, a conceptual rationale for identifying cases in which CBFI may or may not be a good option, and considerations for defining “community” in the context of CBFI. For example, the study reports that benefits could include reduced administrative and transaction costs, increased take-up rates, and promotion of mitigation efforts and floodplain management. The study also reports that challenges could include lack of community interest; limited implementation capability; the likely need to create many approaches given variation in population size, geography, and authority to regulate land use and collect revenue; and political obstacles.
Additionally, the study developed a conceptual rationale for identifying cases in which CBFI may or may not be desirable and identified several factors that could make CBFI more or less desirable than individual flood insurance policies. In addition, the study notes that FEMA may want to consider broadening the current definition of “community” because it may be too narrow in the context of CBFI. Specifically, the study noted that while a town or city would clearly be considered a community under FEMA’s current definition, the status of areas such as business districts or gated communities would be unclear; as a result, it is difficult to say whether or not these kinds of areas would be eligible to purchase CBFI policies. To further assess FEMA’s study, we evaluated relevant elements of its findings against the four public policy goals for federal involvement in natural catastrophe insurance that we have previously identified. The public policy goals, along with examples of elements from the study, follow.
1. Charging premium rates that fully reflect actual risks. Our prior work has shown that charging rates that do not fully reflect actual risks makes it difficult for FEMA to maintain the financial stability of NFIP, sends policyholders inaccurate price signals about their chances of incurring losses, and reduces incentives for policyholders to undertake mitigation efforts. FEMA’s study identified areas for further consideration, including several related to charging premium rates that fully reflect risks: responsibility for bearing and sharing risk; responsibility for writing policies and loss adjustments; underwriting, pricing, and allocation of premium costs; and pricing expertise, including valuation of mitigation measures. The study also outlined concepts for innovative uses of CBFI that could help NFIP charge full-risk rates, such as requiring communities interested in CBFI to provide comprehensive analysis of flood risk in their communities, which would enhance NFIP’s knowledge of flood risk at the community level.
2. Encouraging private markets to provide natural catastrophe insurance. Our prior work has shown that covering flood-related losses through NFIP involves significant federal expense, and Congress has shown interest in reducing the federal government’s role in flood insurance by transferring its exposure to the private sector. While the private sector has not historically been willing to participate in the flood insurance market, facilitating certain conditions could help increase private sector involvement. FEMA’s study discusses ways in which CBFI could encourage or discourage private market participation in flood insurance markets. For example, the study suggests that CBFI could encourage private sector involvement if NFIP offered it in either of two forms. First, CBFI could be offered as a way to insure all properties at a base level of coverage, providing the private sector with an opportunity to offer supplementary insurance above this base level of coverage. Second, CBFI could be offered as coverage for residual risk properties—the highest-risk properties that the private sector would not want to insure—allowing the private sector to insure all other properties. However, the study also suggests several challenges that may outweigh these potential benefits.
For example, private insurers would need to acquire the information and expertise to price CBFI policies in line with actuarial standards and would have difficulty diversifying their risk pool—issues we have previously identified as key private sector concerns about offering flood insurance. Additionally, CBFI may discourage private sector participation because servicing CBFI policies after a disaster could be difficult, as the damage would be concentrated in a set geographic area.
3. Encouraging broad participation in natural catastrophe insurance programs. Our prior work has shown that nationwide flood insurance penetration rates are estimated to be low and that information on compliance with the mandatory purchase requirement is limited. The study provided some discussion of how potential benefits and challenges of CBFI could relate to increasing NFIP participation. For example, the study notes that CBFI could help increase take-up rates and encourage compliance with mandatory purchase requirements, but it also noted that there may be challenges related to requiring all community members to participate in CBFI and that some communities would not be able to oversee compliance with mandatory purchase requirements (unless that responsibility remained with the lender).
4. Limiting costs to taxpayers before and after a disaster. Our prior work has shown that the losses already generated by NFIP, as well as the potential for future losses, have created substantial financial exposure for the federal government and that NFIP likely will not generate sufficient premium revenue to repay the billions of dollars borrowed from Treasury. The study provided some discussion of how CBFI could potentially limit costs to taxpayers by encouraging communities to pursue mitigation options that reduce risk of flood damage and result in lower premiums, or by potentially reducing administrative and transaction costs. However, the study also said that, given the administrative costs required to design and set up the program, these cost savings could be limited if few communities decide to enroll.
Finally, FEMA officials said that the CBFI study’s findings met their needs. According to the officials, the study’s broad discussion of areas for further research, as well as potential benefits and challenges of CBFI, informed their opinion on the prospects for CBFI. The FEMA report, which included the study, provides a brief set of related conclusions, including that FEMA should not implement CBFI. FEMA officials explained that, after reviewing the results of the study, they concluded that FEMA should not conduct further related research or implement CBFI for several reasons:
The challenges outlined in the study—such as administrative burden on communities and FEMA, unavailable data, unclear cost distribution among community members, and potential legislative requirements—are significant and likely outweigh the benefits.
A limited number of communities would be likely to participate in a CBFI option, as only one community expressed positive interest in CBFI after FEMA presented the idea during previous NFIP reform listening sessions.
Dedicating FEMA’s resources to other potential NFIP reforms and to strengthening existing programs would be a better use of those resources.
Based on the factors cited by FEMA officials and our prior work, we agree that FEMA’s conclusion that it was not advisable to implement CBFI at this time was reasonable.
We have previously reported on how some of the challenges that the CBFI study outlined apply to NFIP more broadly. For example, we have previously reported that FEMA’s resource constraints may jeopardize potential NFIP reform efforts. In 2008, we reported that implementing a combined federal flood and wind program could make addressing existing management challenges even more difficult, given the resources that would be required to administer and oversee a new program. In 2015, we reported that while FEMA had begun taking some actions to improve its capacity to administer NFIP, it was unclear whether FEMA had the resources required to complete its efforts to implement both the Biggert-Waters Act and HFIAA reforms. Also, starting in 2013, we have stated in multiple reports that FEMA does not collect some data necessary to determine flood risk—a key factor for determining NFIP premiums for individual policyholders or, potentially, communities. In addition, we have reported that some communities may face challenges administering NFIP and flood mitigation programs. In 2013, we reported that communities such as Indian tribes may lack the resources and administrative capacity needed to administer NFIP requirements, and in 2014, we reported that experts claim that some communities—especially rural ones—may lack the expertise and administrative capabilities to apply for and administer grants for mitigation activities. We provided a draft of this report to FEMA for its review. FEMA did not provide formal comments and did not have any technical comments. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I. Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov. In addition to the contact named above, Patrick Ward (Assistant Director); Chloe Brown and Tarik Carter (Analysts-in-Charge); Namita Bhatia-Sabharwal; and Jennifer Schwartz made key contributions to this report. Also contributing to this report were David Dornisch; Jessica Sandler; and Jena Sinkfield.
Floods are the most common and destructive natural disaster in the United States. The National Flood Insurance Program (NFIP), which the Federal Emergency Management Agency (FEMA) administers, has struggled financially both to pay for flood losses and to keep rates affordable. To address these and other challenges, Congress has passed legislation including the Homeowner Flood Insurance Affordability Act of 2014 (HFIAA). HFIAA included provisions for FEMA to conduct a study and submit a report that assesses and recommends options, methods, and strategies for making community-based flood insurance (CBFI)—flood insurance that a community would purchase to cover all properties located within it—available through NFIP. FEMA presented the study and related report to Congress in March 2016. HFIAA also includes a provision for GAO to review the FEMA report, including the study, and submit a related report to Congress. To review the study and report FEMA submitted to Congress, GAO analyzed the FEMA study's objectives, methodology, and findings, and evaluated the FEMA report's conclusions. GAO analyzed the appropriateness of the objectives, the reasonableness of the methodology, and the extent to which the conclusions were supported by the findings. GAO also evaluated relevant study findings against public policy goals for federal involvement in catastrophe insurance previously identified by GAO, and interviewed FEMA and National Academy of Sciences (NAS) officials. GAO is not making recommendations in this report. FEMA used reasonable objectives and methodology for its study on CBFI. FEMA contracted with NAS to conduct the study and worked with NAS to design a study that would provide a high-level, independent discussion of issues related to CBFI. According to FEMA officials, such a study would help FEMA decide whether CBFI should be implemented, an important step before developing any plans to implement CBFI. Once the study was designed, NAS conducted the study independently, using input obtained from an expert committee that met twice in early 2015. The members of the committee represented academia, the private sector, and state and federal government. GAO determined that the study objectives were designed to meet FEMA's needs, that the methodology supported the objectives, and that alternative methodologies that were considered would have faced various limitations. In its report submitted to Congress, which contained the study, FEMA concluded that it should not conduct further related research or implement CBFI. Specifically, it concluded that the challenges outlined in the study outweigh any potential benefit when considered against limited community interest. FEMA officials further cited the need to dedicate FEMA's resources to effective NFIP reform. Based on the factors cited by FEMA officials, and the consistency of these factors with findings in prior GAO reports on NFIP, GAO determined that FEMA's conclusion was reasonable. For example, prior GAO work has highlighted challenges FEMA faces in balancing reform efforts with limited resources. In prior work, GAO identified four public policy goals for federal involvement in natural catastrophe insurance and used them to evaluate changes to NFIP. While the FEMA study did not use these goals, as part of its assessment GAO evaluated relevant elements of the study against them.
For example, one of these goals is charging premium rates that fully reflect actual risk, and the study discusses innovative uses of CBFI that could help NFIP charge such rates. Another goal is encouraging private markets to provide natural catastrophe insurance, and the study discusses ways in which CBFI could encourage private market participation in flood insurance markets, as well as challenges that CBFI would pose to private insurers.
Interior oversees and manages the nation’s publicly owned natural resources, including parks, wildlife habitat, and crude oil and natural gas resources on over 500 million acres onshore and in the waters of the Outer Continental Shelf (OCS). In this capacity, Interior is authorized to lease federal fossil and renewable energy resources and to collect the royalties associated with their production. These substantial revenues are disbursed to 38 states, 41 Indian tribes, Interior’s Office of Trust Funds Management on behalf of some 30,000 individual Indian royalty owners, and U.S. Treasury accounts. Royalties paid for fossil and renewable resources extracted from leased lands represent the principal source of the $12.6 billion in revenues managed by MMS—$10.7 billion, or more than 85 percent of the revenues received in fiscal year 2006. Of these, oil and natural gas leases are the most significant component, accounting on average for nearly 90 percent of the royalties received over the past five years. For oil and gas, production royalties are paid either in value or in kind. The OCS Lands Act of 1953, as amended, and the Mineral Leasing Act of 1920, as amended, authorize the collection of production royalties either in value or in kind for federal lands leased for development onshore and on the OCS. Furthermore, according to MMS, the terms of virtually all federal oil and gas leases provide for royalties to be paid in value or in kind at the discretion of the lessor. The Energy Policy Act of 2005 provides additional statutory requirements to support the operation and funding of a program for managing federal oil and gas royalties in kind. MMS also collects revenue generated by exploration and development of geothermal energy resources, which are commonly used to generate electricity. Until recently, the Geothermal Steam Act of 1970, as amended, directed MMS to disburse royalties collected from geothermal energy development such that 50 percent of geothermal royalties were retained by the federal government and the other 50 percent was disbursed to the states in which the federal leases are located. A provision of the Energy Policy Act of 2005 changed the distribution of the royalties collected from geothermal resources. While 50 percent of federal geothermal royalties must still be disbursed to the states in which the federal leases are located, an additional 25 percent must now be disbursed to the counties in which the leases are located, leaving only 25 percent to the federal government. As Assistant Secretary Allred of Interior recently testified before the Congress, the absence of price thresholds in leases issued in 1998 and 1999 has already cost the government almost $1 billion, and MMS has estimated a range of potential future foregone revenue for these leases of between $6.4 billion and $9.8 billion. MMS calculated these estimates under a range of assumptions about oil and natural gas prices and future production levels. We reviewed MMS’s assumptions and methodology for estimating the potential foregone revenue from the 1998 and 1999 leases and found them to be reasonable. However, because there is considerable uncertainty about future oil and natural gas prices and production levels, actual foregone royalties could end up being higher or lower than MMS’s estimates. MMS is currently negotiating with oil and gas companies to apply price thresholds to future production from the 1998 and 1999 leases.
If successful, this approach would partially undo the omission of price thresholds for future production, thereby implementing the royalty relief as though price thresholds had been included in the leases. However, the results of the negotiations have been mixed so far—as of late February 2007, only 6 of 45 companies had agreed to terms—and a current legal challenge to Interior’s authority to set price thresholds on any DWRRA leases may further deter or complicate a negotiated settlement. In addition to foregone royalty revenues from leases issued in 1998 and 1999, royalty revenues on leases issued under the DWRRA in 1996, 1997, and 2000 are also threatened pending the outcome of a legal challenge regarding price thresholds. Specifically, Kerr-McGee filed suit against the Department of the Interior in early 2006, challenging its authority to place price thresholds on any of the leases issued under the DWRRA. In effect, this suit seeks to remove price thresholds from the leases in question. In June 2006, Kerr-McGee agreed to enter into mediation with Interior in an attempt to resolve the issue; however, the mediation was unsuccessful and litigation has resumed. As of March 2007, the leases in question have generated approximately $1 billion in royalties. If the government loses this legal challenge, it may be required to refund these royalties—perhaps with interest penalties—and to forego any future royalties on these leases, and perhaps on any lease issued in 1996, 1997, and 2000. As a result, the government could stand to lose billions of additional dollars. We reviewed the RIK pilot program for this committee in two separate reports, in 2003 and 2004, and found that MMS did not collect the information necessary to effectively monitor and evaluate the program, including the administrative costs of the RIK program and the revenue impacts of all sales. We found that MMS lacked this information largely because it had not developed information systems to collect it rapidly and efficiently. We made several recommendations in our 2003 and 2004 reports to address the shortcomings we identified. Specifically, to further the development of management controls for MMS’s RIK program, we recommended that the Secretary of the Interior instruct the appropriate managers within MMS to identify and acquire key information needed to monitor and evaluate performance prior to expanding the RIK program. We specified that such information should include the revenue impacts of all RIK sales, administrative costs of the RIK program, and expected savings in auditing revenues. We also recommended that MMS clarify the RIK program’s strategic objectives to explicitly state that the goals of RIK include obtaining fair market value and collecting at least as much revenue as MMS would have collected in cash royalty payments. MMS agreed with both recommendations and has taken several steps to address these shortcomings. We acknowledge the agency’s efforts and, within the context of the program’s scope at the time of our reports, consider our recommendations implemented by the agency. However, the expanded use of RIK since our last review raises an additional concern. The RIK program has actively expanded the scope of its operations as MMS has increasingly opted to take royalties in kind rather than in cash.
As MMS reported in its September 2006 Report to Congress, today’s RIK operation manages a significant portfolio of the nation’s oil and gas royalty assets, collected primarily from federal leases in the Gulf of Mexico. This portfolio has expanded more than threefold from 1999 to the present—some 82 million barrels of oil equivalent were taken in kind in fiscal year 2005—and is expected to continue to grow for the foreseeable future. The Energy Policy Act of 2005 permanently established an RIK operation with administrative and business costs to be paid from royalty revenues generated by RIK sales, effectively transitioning the program from pilot status to a steady-state business operation and potentially enabling a further expansion of the RIK program. The Act restricts the use of RIK to those situations where the benefit is determined to equal or exceed the benefit from royalties in value prior to the sale. However, the larger scale of the RIK program at present makes it unclear whether MMS can effectively and accurately make this determination going forward. In light of this issue, we are undertaking additional work for the Congress. Specifically, we have several ongoing reviews assessing, among other things, MMS’s ability to quantify and compare administrative costs and revenues of the RIK and royalties in value programs; the effectiveness of the systems used to collect, account for, and disburse royalties; and the accuracy of royalty revenue collection, including evaluating whether the value of RIK payments equals or exceeds the value of royalties that would have been received in value for oil and gas, as required by statute. In a 2006 report on geothermal royalties, we found that MMS had erroneous and missing historical geothermal electricity revenue data and did not collect sufficient data from royalty payors to accurately assess whether it was collecting the amount of royalties required by statute. Specifically, about 40 percent of the royalty revenue data for royalty payors was either missing or erroneous in the projects we reviewed. In addition, MMS did not have sufficient historical gross revenue data for geothermal electricity sales. MMS is charged with collecting and distributing royalties collected from the development of geothermal resources used to generate electricity. The Energy Policy Act of 2005 included provisions that significantly changed how geothermal royalties are calculated but also instructed the Secretary of the Interior to seek to maintain the same level of royalties over the next ten years that would have been collected prior to the Act’s passage. We found that, to meet the statutory requirements, MMS will need to calculate the percentage of gross sales revenues that lessees will pay in future royalties from electricity sales and compare this to what lessees would have paid prior to the Act. Comparing royalties collected under the provisions of the Act with what would have been collected under the old system would require historical data on gross revenues from geothermal electricity sales, as well as accurate royalty data on those sales. As a result of the insufficient gross revenue data and the missing or erroneous royalty revenue data, MMS is unable to determine whether it is collecting the amount of royalties on geothermal electricity production required by statute. In our report, we recommended that the Secretary of the Interior direct MMS to correct these deficiencies, and the agency agreed with our findings and recommendations.
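The statutory arithmetic just described can be sketched in a few lines. The example below encodes the disbursement shares noted earlier (50 percent to states, 25 percent to counties, 25 percent retained federally) and the pre-/post-Act royalty comparison. The figures are hypothetical, the function names are ours, and the percentage-of-gross framing is a simplification of the actual valuation rules, so the sketch is illustrative only.

    # Geothermal royalty disbursement and the pre-/post-Act level comparison.
    # All figures are hypothetical; percentage-of-gross is a simplification.

    def disburse(royalty: float, post_act: bool) -> dict:
        """Split a royalty collection under the old or new statutory shares."""
        if post_act:  # Energy Policy Act of 2005
            return {"state": 0.50 * royalty, "county": 0.25 * royalty,
                    "federal": 0.25 * royalty}
        return {"state": 0.50 * royalty, "federal": 0.50 * royalty}

    gross_revenue = 10_000_000   # hypothetical annual electricity sales on a lease
    old_rate = 0.10              # hypothetical pre-Act royalty share of gross revenue
    new_rate = 0.10              # rate needed to keep collections at the old level

    old_royalty = old_rate * gross_revenue
    new_royalty = new_rate * gross_revenue
    print(disburse(new_royalty, post_act=True))
    print("level maintained:", abs(new_royalty - old_royalty) < 1)

Without reliable historical gross revenue and royalty data, the baseline term in this comparison cannot be established, which is the gap our 2006 report identified.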
We will continue to monitor the agency’s efforts to address these shortcomings. As all the attention royalty management has received in the Congress and the media suggests, Interior’s performance in managing this effort is a cause for concern. Billions of dollars have been lost already, and potentially billions more are at risk. In a time of dire long-term national fiscal challenges, it is urgent that this problem be fixed and that the American public’s confidence that the sale of its national resources generates a fair return be restored. Our work on this issue is continuing on multiple levels, including comparing the value of royalties taken in kind to the value of royalties taken as cash, reviewing the diligence of resource development, and evaluating the accuracy of the agency’s cost, revenue, and production data. We look forward to this continued work, and to helping this committee and the Congress as a whole exercise oversight of this important issue. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. For further information about this testimony, please contact me, Mark Gaffigan, at 202-512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Frank Rusco, Assistant Director; Robert Baney; Ron Belak; Philip Farah; Doreen Feldman; Glenn Fischer; Dan Haas; Chase Huntley; Dawn Shorey; Barbara Timmerman; Maria Vargas; and Jacqueline Wade.
Oil and Gas Royalties: Royalty Relief Will Likely Cost the Government Billions, but the Final Costs Have Yet to Be Determined, GAO-07-369T (Washington, D.C.: Jan. 18, 2007).
Suggested Areas for Oversight for the 110th Congress, GAO-07-235R (Washington, D.C.: Nov. 17, 2006).
Department of Interior: Royalty-in-Kind Oil and Gas Preferences, B-307767 (Washington, D.C.: Nov. 13, 2006).
Royalty Revenues: Total Revenues Have Not Increased at the Same Pace as Rising Natural Gas Prices due to Decreasing Production Sold, GAO-06-786BR (Washington, D.C.: June 21, 2006).
Renewable Energy: Increased Geothermal Development Will Depend on Overcoming Many Challenges, GAO-06-629 (Washington, D.C.: May 24, 2006).
Mineral Revenues: Cost and Revenue Information Needed to Compare Different Approaches for Collecting Federal Oil and Gas Royalties, GAO-04-448 (Washington, D.C.: Apr. 16, 2004).
Mineral Revenues: A More Systematic Evaluation of the Royalty-in-Kind Pilots Is Needed, GAO-03-296 (Washington, D.C.: Jan. 9, 2003).
The Department of the Interior's Minerals Management Service (MMS) is charged with collecting and administering royalties paid by companies developing fossil and renewable energy resources on federal lands and within federal waters. To promote development of oil and natural gas, fossil resources vital to meeting the nation's energy needs, the federal government at times has provided "royalty relief," waiving or reducing the royalties that companies must pay. In these cases, relief is typically applicable only if prices remain below certain threshold levels. Oil and gas royalties can be taken at MMS's discretion either "in value" as cash or "in kind" as a share of the product itself. MMS also collects royalties on the development of geothermal energy resources, a renewable source of heat and electricity, on federal lands. This statement provides (1) an update of our work regarding the fiscal impacts of royalty relief for leases issued under the Deep Water Royalty Relief Act of 1995; (2) a description of our recent work on the administration of the royalties in kind program, as well as ongoing work on related issues; and (3) information on the challenges to collecting geothermal royalties identified in our recent work. To address these issues we relied on recent GAO reports on oil, gas, and geothermal royalty collection systems. We are also reviewing key MMS estimates and data. The absence of price thresholds in oil and gas leases issued by MMS in 1998 and 1999 has already cost the government about $1 billion, and the agency has recently estimated that future foregone royalties would be $6.4 billion to $9.8 billion over the lives of the leases. Precise estimates of the actual foregone royalties, however, are not possible at this time because future projections are sensitive to price and production levels, both of which are subject to change. MMS is currently negotiating with oil and gas companies to apply price thresholds to future production from these leases, with mixed results: only 6 of the 45 companies involved have agreed to terms. Moreover, a pending legal challenge to Interior's authority to include price thresholds on any leases issued under the Deep Water Royalty Relief Act could, if successful, cost the government billions more in refunded and foregone revenue. In our most recent review of the royalty in kind (RIK) program, conducted in 2004, we found that MMS was unable to determine whether the revenues received from its sales of oil taken in kind were equivalent to receiving royalties in value, largely because it had not developed systems to rapidly and efficiently collect this information. We made recommendations, which the agency has implemented, that improved the administration of the program as it existed at the time of our report. However, the continued expansion of the program raises a new question about the adequacy of the agency's overall management practices and internal controls to meet the increasing demands placed on the RIK program. Accordingly, we are undertaking follow-on reviews assessing, among other things, the agency's ability to quantify and compare administrative costs and revenues of the RIK and royalties in value programs and the extent to which the revenues collected under the RIK program are equal to or greater than what would have been received had they been taken in value.
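As a minimal illustration of the test just described, the sketch below compares net proceeds from an in-kind sale with the cash royalty that would otherwise have been paid. The figures and the simple netting logic are hypothetical assumptions, not MMS's actual valuation procedure.

    # Hypothetical RIK benefit test: in-kind proceeds, net of administrative
    # cost, must equal or exceed the royalty that would have been paid in value.

    def rik_benefit_test(royalty_barrels: float, rik_sale_price: float,
                         rik_admin_cost: float, in_value_price: float) -> bool:
        """True if net RIK proceeds equal or exceed the cash-royalty alternative."""
        rik_net = royalty_barrels * rik_sale_price - rik_admin_cost
        in_value = royalty_barrels * in_value_price
        return rik_net >= in_value

    # 1 million royalty barrels sold at $60.25 under RIK, versus a
    # payor-reported value of $60.00, with $200,000 in administrative cost.
    print(rik_benefit_test(1_000_000, 60.25, 200_000, 60.00))  # True, by $50,000

As the sketch suggests, the determination turns on data MMS has historically lacked: the administrative costs of RIK sales and the value the government would have received in cash.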
In a 2006 report on geothermal royalties, we found that missing and erroneous historical data, as well as insufficient data on electricity sales, meant that MMS was unable to accurately determine whether it was collecting royalties as directed by statute. The Energy Policy Act of 2005 included provisions that significantly changed how geothermal royalties are calculated but also directed Interior to maintain the same level of royalties over the next ten years that would have been collected prior to the Act's passage. We found that making this determination requires historical data on sales of electricity produced from geothermal resources as well as accurate royalty data. However, MMS did not have sufficient historical gross revenue data with which to establish a baseline for past royalties paid as a percentage of electricity revenues. Further, about 40 percent of MMS's royalty data was either missing or erroneous for the projects we reviewed. We recommended that MMS correct these deficiencies, and the agency agreed. We are continuing to monitor the agency's efforts.
USDA is, by law, charged with leading and coordinating federal rural development assistance. USDA Rural Development administers the greatest number of development programs for rural communities and directs the highest average amount of federal program funds to these communities. Most of Rural Development’s programs and activities provide assistance in the form of loans, loan guarantees, and grants. Three offices are primarily responsible for carrying out this mission: the Rural Business-Cooperative Service (RBS), the Rural Housing Service (RHS), and the Rural Utilities Service (RUS). RBS administers programs that provide financial, business planning, and technical assistance to rural businesses and cooperatives that receive Rural Development financial assistance. It also helps fund projects that create or preserve jobs in rural areas. RHS administers community facilities and housing programs that help finance new or improved housing for moderate-, low-, and very low-income families. RUS administers electric, telecommunications, and water programs that help finance the infrastructure necessary to improve the quality of life and promote economic development in rural areas. Since its beginning in 1953, SBA has steadily increased the amount of total assistance it provides to small businesses, including those in rural areas, and expanded its array of programs for these businesses. SBA’s programs now include business and disaster relief loans, loan guarantees, investment capital, contract procurement and management assistance, and specialized outreach. SBA’s loan programs include its 7(a) loan guarantee program, which guarantees loans made by commercial lenders to small businesses for working capital and other general business purposes, and its 504 loan program, which provides long-term, fixed-rate financing for major fixed assets, such as land and buildings. These loans are generally provided through participating lenders, up to a maximum loan amount of $2 million. SBA also administers the Small Business Investment Company (SBIC) program, which provides long-term loans and venture capital to small firms. In September 2007, SBA announced a new loan initiative designed to stimulate economic growth in rural areas. The Rural Lender Advantage program, a part of SBA’s 7(a) loan program, is aimed at encouraging rural lenders to finance small businesses. It is part of a broader initiative to boost economies in regions that face unique challenges due to factors such as declining population or high unemployment. Generally speaking, collaboration involves any joint activity that is intended to produce more public value than could be produced when the agencies act alone. Collaboration efforts are often aimed at establishing approaches to working together; clarifying priorities, roles, and responsibilities; and aligning resources to accomplish common outcomes. On the federal level, collaboration tends to occur through interagency agreements, partnerships with state and local governments and communities, and informal methods (e.g., networking activities, meetings, conferences, or other discussions on specific projects or initiatives). Agencies use a number of different mechanisms to collaborate with each other, including MOUs, procurement and other contractual agreements, and various legal authorities. Both SBA and USDA have used the authority provided by the Economy Act to facilitate collaboration.
The Economy Act is a general statutory provision that permits federal agencies to enter into mutual agreements with other federal agencies to purchase goods or services and take advantage of specialized experience or expertise. It is the most commonly used authority for interagency agreements, allowing agencies to work together to obtain items or services from each other that cannot be obtained as conveniently or economically from a private source.

SBA has also used contractual work agreements to collaborate with other federal agencies. For example, SBA has an agreement with the Farm Credit Administration (FCA) to examine SBA’s Small Business Lending Companies (SBLC). SBA oversees SBLCs, which are nondepository lending institutions that it licenses and that play a significant role in SBA’s 7(a) Loan Guaranty Program. However, SBLCs are not generally regulated or examined by financial institution regulators. SBA entered into a contractual agreement with FCA in 1999 that tasked FCA with conducting safety and soundness examinations of SBA’s SBLCs. Under the agreement, FCA examined 14 SBLCs during a 1-year period. The exams were conducted on a full cost recovery basis, and both agencies had the option to terminate or extend the agreement after a year. The agreement allowed SBA to take advantage of FCA’s expertise in examining specialized financial institutions and offered FCA the opportunity to broaden its experience through exposure to a different lending environment.

Additionally, using the disaster provisions under its traditional multifamily and single-family rural housing programs, Rural Development collaborated with FEMA in providing assistance to hurricane victims. Through this collaborative effort, Rural Development assisted victims of Hurricane Katrina by (1) making multifamily rental units available nationwide; (2) providing grants and loans for home repair and replacement; and (3) providing mortgage relief through a foreclosure moratorium and mortgage payment forbearance. Rural Development also shared information with FEMA on USDA-owned homes for lease, developed an Internet presence to inform victims of available housing, and made resources available at Rural Development state offices to assist in these efforts.

While SBA and Rural Development are not currently involved in a collaborative working relationship, both agencies have some experience collaborating with each other on issues involving rural development. Specifically, on February 22, 1977, SBA and Rural Development established an MOU for the purpose of coordinating and cooperating in the use of their respective loan-making authorities. Under the general guidelines of the agreement, appropriate SBA and Rural Development officials were to establish a liaison and periodically coordinate their activities to (1) define areas of cooperation, (2) assure that intended recipients received assistance, (3) enable both agencies to provide expeditious service, and (4) provide maximum utilization of resources. Again, on March 30, 1988, SBA and Rural Development agreed to enter into a cooperative relationship designed to encourage and maximize effectiveness in promoting rural development. The MOU outlined each agency’s responsibilities to (1) coordinate program delivery services and (2) cooperate with other private sector and federal, state, and local agencies to ensure that all available resources worked together to promote rural development.
SBA and Rural Development officials told us that the 1977 and 1988 agreements had lapsed and had not been renewed. Finally, in creating the RBIP in 2002, Congress authorized Rural Development and SBA to enter into an interagency agreement to create rural business investment companies. Under the program, the investment companies would leverage capital raised from private investors, including rural residents, into investments in rural small businesses. The legislation recommended that Rural Development manage the RBIP with the assistance of SBA because of SBA’s investment expertise and experience and because the program was modeled after SBA’s SBIC program. The legislation provided funding to cover SBA’s costs of providing such assistance. A total of $10 million was available for the RBIP in fiscal years 2005 and 2006. Rural Development and SBA conditionally elected to fund three rural business investment companies. However, according to SBA officials, only one of these companies has been formed because of challenges in finding investment companies that can undertake such investments. Section 1403 of the Deficit Reduction Act of 2005 rescinded funding for the program at the end of fiscal year 2006. In March 2007, Rural Development began the process of exploring ways to continue the RBIP, despite the rescission.

Both SBA and Rural Development have field offices in locations across the United States. However, Rural Development has more state and local field offices and is a more recognized presence in rural areas than SBA. Prior to its 1994 reorganization, which resulted in a more centralized structure, USDA had field staff in almost every rural county. Consistent with its reorganization, and as we reported in September 2000, USDA closed or consolidated about 1,500 county offices into USDA service centers and transferred over 600 Rural Development field positions to the St. Louis Centralized Servicing Center. The result was a Rural Development field office structure that consisted of about 50 state offices, 145 area offices, and 670 local offices. As part of the reorganization, state Rural Development offices were given the authority to develop their own program delivery systems. Some states did not change, believing that they needed to maintain a county-based structure with a fixed local presence to deliver one-on-one services to rural areas. Other states consolidated their local offices to form district offices. For example, when we performed our audit work in 2000, we found that Mississippi, which maintains a county-based field structure, had more staff and field offices than any other state. Today, Mississippi still maintains that structure and has a large number of field offices, including 2 area offices, 24 local offices, and 3 sub-area offices. The Maine Rural Development office changed its operational structure, moving from 28 offices before the reorganization to 15 afterward. In 2000, it operated out of 3 district offices; it currently has 4 area offices.

SBA currently has 68 district offices, many of which are not located in rural communities or are not readily accessible to rural small businesses. For several years, SBA has been centralizing some of the functions of its district offices to improve efficiency and consistency in approving, servicing, and liquidating loans. Concurrently, SBA has also been moving more toward partnering with outside entities such as private sector lenders to provide services.
SBA’s district offices were initially created to be the local delivery system for SBA’s programs, but as SBA has centralized functions and placed more responsibilities on its lending partners, the district offices’ responsibilities have changed. For example, the processing and servicing of a majority of SBA’s loans—work once handled largely by district office staff—have been moved from district offices to service centers. Moreover, as we reported in October 2001, there has been confusion over the mission of the district offices, with SBA headquarters officials believing the district offices’ key customers are small businesses and district office staff believing that their key customers are the lenders who make the loans. Currently, SBA is continuing its workforce transformation efforts to, among other things, better define the district office role to focus on marketing and outreach to small businesses. We plan to evaluate the extent to which Rural Development offices may be able to help market SBA programs and services by making information available through their district offices. It appears that Rural Development has an extensive physical infrastructure in rural areas and expertise in working with rural lenders and small businesses. Our ongoing work will explore these issues in more depth, including looking at any incentives that exist for Rural Development and SBA to collaborate with each other.

You requested that we conduct a review of the potential for increased collaboration between SBA and Rural Development, and we have recently begun this work. In general, the major objectives of our review are to determine:
1. The differences and similarities between SBA loan programs and Rural Development business programs,
2. The kind of cooperation that is already taking place between SBA and Rural Development offices, and
3. Any opportunities or barriers that may exist to cooperation and collaboration between SBA and Rural Development.

To assess the differences and similarities between SBA loan programs and Rural Development business programs, we will review relevant SBA and Rural Development documents describing their loan and business programs. We will examine relevant laws, regulations, policies, and program rules, including eligibility requirements and types of assistance, funding levels, and eligible uses of program funds. We will obtain data on both agencies’ loan processes and procedures, including any agency goals for awarding loans, documentation requirements, and loan processing times.

To determine what cooperation has taken place between SBA and Rural Development, we will examine previous collaboration efforts and cooperation between the agencies in providing programs and services. We will also review documents such as MOUs, informal interagency agreements, and other documentation, and will conduct interviews with SBA and Rural Development staff at headquarters and field offices to obtain a fuller understanding of these initiatives.

To determine what opportunities or barriers exist to cooperation and collaboration between SBA and Rural Development, we will review relevant laws, regulations, and policies. We will review data from SBA and Rural Development on each agency’s field structure, including office space and personnel, and interview relevant parties on the advantages and disadvantages of co-locating offices. We plan to interview headquarters and field office staff at each agency about past collaboration efforts and any plans to work collaboratively in the future.
We also plan to obtain the perspectives of select lenders that participate in SBA loan programs and Rural Development business programs.

We reported previously in March 2007 and October 2005 that effective collaboration can occur between agencies if they take a more systematic approach to agreeing on roles and responsibilities and establishing compatible goals, policies, and procedures on how to use available resources as efficiently as possible. In doing so, we identified certain key practices that agencies such as SBA and USDA could use to help enhance and sustain their efforts to work collaboratively. These practices include (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, and other means of operating across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; (7) reinforcing agency accountability for collaborative efforts; and (8) reinforcing individual accountability for collaborative efforts. As part of our ongoing work, we plan to review the extent to which the eight key practices relate to possible opportunities for SBA to increase collaboration with Rural Development. For example, we plan to explore the extent to which these practices are necessary elements for SBA to have a collaborative relationship with Rural Development. We are continuing to design the scope and methodology for our work, and we expect to complete this design phase by February 2008. At that time, we will provide details of our approach as well as a committed issuance date for our final report.

Mr. Chairman, Ranking Member Fortenberry, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you may have.

For additional information about this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Paul Schmidt, Assistant Director; Michelle Bowsky; Tania Calhoun; Emily Chalmers; and Ronald Ito.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration (SBA) and Department of Agriculture (USDA) Rural Development offices share a mission of attending to underserved markets, promoting economic development, and improving the quality of life in America through the promotion of entrepreneurship and community development. In the past, these agencies have had cooperative working relationships to help manage their respective rural loan and economic development programs. At this subcommittee's request, GAO has undertaken a review of potential opportunities for SBA to seek increased collaboration and cooperation with USDA Rural Development (Rural Development), particularly given Rural Development's large and recognizable presence in rural communities. In this testimony, GAO provides preliminary views on (1) mechanisms that SBA and USDA have used to facilitate collaboration with other federal agencies and with each other; (2) the organization of SBA and USDA Rural Development field offices; and (3) the planned approach for GAO's recently initiated evaluation of collaboration between SBA and Rural Development. GAO discussed the contents of this testimony with SBA and USDA officials.

While SBA and Rural Development are not currently involved in a collaborative working relationship, SBA and Rural Development have used a number of different mechanisms, both formal and informal, to collaborate with other agencies and each other. For example, both agencies have used the Economy Act—a general statutory provision that permits federal agencies, under certain circumstances, to enter into mutual agreements with other federal agencies to purchase goods or services and take advantage of specialized experience or expertise. SBA and USDA used the act to enter into an interagency agreement to create rural business investment companies to provide equity investments to rural small businesses. For this initiative, Congress also authorized USDA and SBA to administer the Rural Business Investment Program to create these investment companies. However, funding for this program was rescinded at the end of fiscal year 2006. SBA and Rural Development have also used other mechanisms to collaborate, including memorandums of understanding (MOU), contractual agreements, and other legal authorities. For instance, Rural Development has collaborated with the Federal Emergency Management Agency in providing assistance to the victims of Hurricane Katrina using the disaster provisions under its multifamily and single-family rural housing programs. To collaborate with each other, in the past SBA and Rural Development have established MOUs to ensure coordination of programs and activities between the two agencies and improve effectiveness in promoting rural development.

Both SBA and Rural Development have undergone restructuring that has resulted in downsizing and greater centralization of each agency's field operations. Currently, SBA's 68 field offices—many of them in urban centers—are still undergoing the transformation to a more centralized structure. Rural Development has largely completed the transformation and continues to have a large presence in rural areas through a network of hundreds of field offices. Rural Development's recognized presence in rural areas and expertise in the issues and challenges facing rural lenders and small businesses may make these offices appropriate partners to help deliver SBA services. GAO has recently begun a review of the potential for increased collaboration between SBA and Rural Development.
In general, the major objectives are to examine the differences and similarities between SBA loan programs and Rural Development business programs, any cooperation that is already taking place between SBA and Rural Development, and any opportunities for or barriers to collaboration.
Lobbying registrations and reports are required under the Lobbying Disclosure Act of 1995 (the Act), as amended by the Honest Leadership and Open Government Act of 2007 (HLOGA), to disclose the identities of people attempting to influence the government, the subject matters of their attempts, and the amounts of money they spend to accomplish their goals. The Act requires that lobbyists register with the Secretary of the Senate and the Clerk of the House and file periodic reports disclosing their activities. The Act was amended by HLOGA to make those reports due quarterly (they had previously been due semiannually). HLOGA requires lobbyists to disclose whether they held an official covered position in the past 20 years (rather than the 2 years the Act had previously required), whether the client is a state or local government, and whether any members of a coalition or association actively participated in the lobbying activities. Under HLOGA, lobbyists are required to file these registrations and reports electronically with Congress through a single entry point (as opposed to separately with the Secretary of the Senate and the Clerk of the House, as was done prior to HLOGA). The Act, as amended by HLOGA, also provides that registrations and reports must be available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House.

The Act defines “lobbyists” and “lobbying activities” and imposes requirements on the reporting of those activities. Under the Act, a lobbyist can be an individual, a lobbying firm, or an organization that has employees lobbying on its own behalf, depending on the circumstances. Lobbyists are required to file a registration with the Secretary of the Senate and the Clerk of the House for each client on whose behalf a lobbying contact is made if a minimum dollar threshold is exceeded. The registration must list the name of the organization, lobbying firm, or self-employed individual lobbying on that client’s behalf. In addition, the registration and subsequent reports must list the individuals who acted as lobbyists on behalf of the client during the reporting period. For reporting purposes, a lobbyist is defined as a person who has made two or more lobbying contacts and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during any quarter. Registrations and reports must also identify any covered official positions a lobbyist held in the previous 20 years.

The registration and subsequent quarterly reports must also disclose the name of, and further information about, the client. The lobbyist is required to disclose any foreign entities with an interest in the client and must report if the client is a state or local government. When the client is a coalition or association, the lobbyist must identify any constituent organization that contributes more than $5,000 for lobbying in a quarter and actively participates in the planning, supervision, or control of lobbying activities. The registration and subsequent reports may either list each such organization or make the list available on the coalition’s or association’s Web site and disclose the Web address in the report.

Lobbying registrations and reports must include lobbying activity details such as the general issue area and the specific lobbying issues. The lobbyist must also disclose which federal agencies and house(s) of Congress the lobbyist contacted on behalf of the client during the reporting period.
Finally, the registrant must report the amount of money that was spent on lobbying for the client during the reporting period. The lobbying income or expenses disclosed on the reports are to be rounded to the nearest $10,000. A lobbying firm, or any other organization that is hired to lobby on behalf of an entity other than itself, must report the amount of income related to lobbying activities received from the client during the quarter. An organization that has employees who lobby on its behalf must report the expenses incurred in relation to lobbying activities during the quarter. Organizations may use one of three accounting methods to determine their expenses: the Act’s definitions of lobbying expenses; the Internal Revenue Code definitions for non-deductible business expenses; or, if they are a 501(c) nonprofit organization, the definitions under that portion of the Internal Revenue Code.

We estimate that lobbyists could provide accurate supporting information—in either written or verbal form—on income or expenses for at least 95 percent of all first quarter reports filed that required this information. Neither the Act nor lobbying guidance specifies any standards or requirements for lobbyists to maintain records or documentation to support information disclosed in their reports. Nonetheless, lobbyists were able to provide written or oral support for all required elements of the individual reports we examined. However, the extent to which lobbyists could provide written documentation varied across the elements of the reports: it was quite high for some elements, such as income or expenses, but notably lower for others, such as the individuals who acted as lobbyists.

The Act requires lobbyists to make a good faith estimate of either all income received from the client or total expenses of lobbying activities. Although the Act does not contain any special record-keeping provisions, guidance from both the Secretary of the Senate and Clerk of the House recommends that lobbyists retain copies of their filings and supporting documentation for at least 6 years after reports are filed. We estimate that lobbyists have written documentation to support income or expenses for approximately 91 percent of first quarter reports that required this information. We also estimate that lobbyists could provide written support for issues lobbied for approximately 58 percent of reports and for 47 percent of report information on which house of Congress or agencies were lobbied. Lobbyists had written documentation to support information about the individuals who acted as lobbyists for 35 percent of reports. Sample sizes for affiliated organizations, foreign entity interests, and the names of individuals no longer acting as lobbyists were too small to provide reliable estimates of the levels of written documentation and verbal explanations in support of first quarter reports that required this information. Our estimates of the levels of lobbyists’ documentation are based on our review of 100 reports, which included 93 reports of lobbying activity and 7 “no activity” reports. Table 1 provides the actual numbers from our sample that formed the basis for our estimates.
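To make the rounding rule noted above concrete, the following is a minimal sketch, in Python, contrasting rounding to the nearest $10,000 (as the reports instruct) with always rounding up (a discrepancy discussed later in this report). The function names are ours and purely illustrative; nothing here is drawn from the filing system itself.

```python
import math

def to_nearest_10k(amount: int) -> int:
    # Conventional rounding, with halves rounding up:
    # $12,000 -> $10,000; $15,000 -> $20,000.
    return ((amount + 5_000) // 10_000) * 10_000

def up_to_next_10k(amount: int) -> int:
    # What some filers did instead: always round up.
    # $12,000 -> $20,000.
    return math.ceil(amount / 10_000) * 10_000

assert to_nearest_10k(12_000) == 10_000
assert up_to_next_10k(12_000) == 20_000
```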
In our meetings with lobbyists, the types of documentation we were provided and the processes used to track lobbying activity in support of filings varied widely, with some of the lobbyists in our sample providing comprehensive written documentation supporting information in the reports they filed and others providing less or no documentation, often adding verbal explanations. Most lobbyists supported their reported information on lobbying income with billing statements, invoices, or contracts. Some lobbyists supported their reported information on lobbying contacts and issues using e-mails or summaries of meetings with Congress or federal agencies. Lobbyists who had little or no written documentation provided verbal explanations to support the reported information. Our sample also showed that just one of the disclosure reports did not match the information provided for one of the specific elements we examined during our review. In this case, the lobbyist we visited realized that the lobbying expenses dollar amount had been reported incorrectly and subsequently filed an amended report to correct the amount of expenses disclosed.

Although the legislation and guidance do not require lobbyists to maintain records or documentation to support information disclosed in their reports, several of the lobbyists we spoke to during our review expressed interest in obtaining advice or information on documentation that would best support their filings. Some lobbyists, for example, told us they would like information from other firms on how to set up tracking and compliance systems before filing deadlines. Some felt that a checklist of useful information and documentation to consider when filing would be helpful.

Lobbyists had systems to track lobbying contacts and the amount of time spent on lobbying activities for an estimated 49 percent of first quarter reports. Some lobbyists had detailed tracking and accounting systems. One large lobbying firm, for example, provided us with a demonstration of a database system that it developed to track its lobbying activity in detail. The system captured data provided by the firm’s many lobbyists about their contacts, time charges, issues lobbied, and clients, and generated monthly e-mail reports for lobbyists so that they could verify the information and report any discrepancies. Other firms used lobbying activity tracking systems that were integrated with their billing systems. Of these, some told us that they had recently augmented their billing codes to better track lobbying activity to report under HLOGA. In contrast, the other lobbyists we spoke to did not have detailed tracking or accounting systems for lobbying activity. These lobbyists estimated elements of their lobbying activity as necessary in preparing their reports. Lobbyists who lobbied and performed non-lobbying consulting services for clients for the same monthly retainer estimated the amount of time spent lobbying versus providing consulting services in preparing their reports. In addition, a number of firms reported they were uncertain whether to track the time and involvement of volunteers or members of their boards of directors. Six of the lobbyists within our sample reported some aspects of lobbying activity that did not take place.
Some lobbyists told us that they
• reported individuals as lobbyists even though the individuals were not involved in any lobbying activities for the client in question during the reporting period;
• reported that they had lobbied certain federal agencies even though they indicated during our visits that such activity did not take place during the reporting period; and
• filed reports but stated that they actually had not engaged in any lobbying activity for the client in question during the reporting period.

In addition, three lobbyists rounded the amount of their lobbying income up to the next $10,000, rather than to the nearest $10,000 as instructed. A few of the lobbyists cited unclear and vague law and guidance as a reason for reporting more information than their lobbying activity required. But other lobbyists told us that they added information as part of a cautious approach to the filing process, to lessen the chances that they would fail to fully report. Another reason for reporting additional data was that lobbyists did not want to edit their reports each quarter to reflect what they perceived to be minor changes—such as changes in the names of individuals acting as lobbyists on specific issues or in lists of federal agencies lobbied—and chose to leave this kind of information in their reports in case it should become applicable again in a future reporting period.

Lobbyists who registered in the first quarter of 2008 largely filed disclosure reports for the reporting period as required. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations from the House Lobbyists Disclosure Database to their corresponding first quarter disclosure reports using an electronic matching algorithm that allowed for misspellings and other minor inconsistencies between the registrations and reports. Our analysis of the 1,460 new registrations showed that the majority (1,358) had a clearly corresponding disclosure report on file, indicating that the requirement for these lobbyists to file reports for specific clients was generally met. However, we could not identify corresponding first quarter reports of lobbying activity for 102 (approximately 7 percent) of the 1,460 new registrations. We brought this matter to the attention of the Secretary of the Senate and Clerk of the House so that they could follow up with the lobbyists to resolve any potential compliance issues. Staff of the Secretary of the Senate and Clerk of the House told us that while the newly registered lobbyists for whom we could not identify corresponding reports may not have filed a report, it is possible that they filed reports with information that did not fully match their registrations. For example, if a client’s name did not precisely match the name listed on the lobbyist’s registration, it would be difficult to match the registrant to its corresponding report. Figure 1 below illustrates the number of registrations for which we were unable to find a corresponding report.

Some lobbyists identified certain challenges to their compliance with the Act, including uncertainty about how to report various pieces of information about their organizations and lobbying activity. Our random sample of 100 quarterly reports included 86 separate lobbyists (some lobbyists had reports for more than one client in our sample).
About half (41 out of 86) of these lobbyists said that they needed further information specific to their own situations, in addition to the law and the guidance provided by the Secretary of the Senate and the Clerk of the House. Many lobbyists told us that when they had questions or needed clarification regarding the law and associated guidance, the staffs of the Secretary of the Senate and the Clerk of the House were helpful in providing needed assistance. For example, some lobbyists told us:
• They were confused about whether and under what circumstances members of a trade association had to be listed under the requirement to report certain affiliated organizations.
• They did not know how to report foreign entity interest in the client if the client is a U.S. corporation with an international parent company.
• The issue area codes used to indicate types of lobbying activity are not well defined and may overlap, requiring them to use their judgment to choose between codes that were not entirely applicable to their individual situations.
• They were not sure which of their activities constituted “lobbying activity” as defined in the Act.
• They did not know how much detail they needed to provide on the specific lobbying issues for each client.

Some lobbyists also cited certain administrative constraints as challenges to their compliance with the Act. Under HLOGA, the deadline for filing disclosure reports is 20 days after each reporting period, or the first business day after the 20th day if the 20th day is not a business day. Prior to HLOGA, the deadline for filing disclosure reports was 45 days after the end of each reporting period. Some lobbyists told us:
• The new 20-day deadline was difficult to meet because of limitations of their own internal billing or record-keeping systems.
• The increased frequency of reporting presented an administrative burden.
• They found the increased frequency of reporting to be beneficial for their own record-keeping.

Some lobbyists told us they took added steps to help ensure their compliance with the new requirements of HLOGA. These actions included conducting internal training sessions for their staff, hiring outside counsel to give presentations and provide training, and attending training seminars and workshops offered by other lobbying organizations, law firms, and membership organizations in the lobbying community. In this regard, a vehicle for lobbying organizations to share information may assist some lobbyists in better ensuring the accuracy and completeness of information in their lobbying disclosure reports. HLOGA includes the sense of Congress that the lobbying community should develop proposals for multiple organizations that could provide a number of programs to assist compliance with lobbying disclosure, such as creating standards for the organizations appropriate to the type of lobbying and the individuals to be served, and providing training and educational materials on reporting and disclosure requirements. The creation of such organizations may assist the lobbying community in minimizing confusion and clarifying the information needed to comply with the Act.

Officials from the United States Attorney’s Office for the District of Columbia (the Office) informed us that resources are assigned to lobbying compliance issues based on competing priorities within the Office.
In addition to responding to referred cases of lobbyist noncompliance, the Office is responsible for prosecuting all criminal cases in the District of Columbia, including cases that would otherwise be prosecuted by state authorities in other jurisdictions. The Office also prosecutes and defends all civil cases in the District of Columbia in which the United States is a party and initiates legal process to collect debts owed to the federal government. It is the largest U.S. Attorney’s Office, with more than 350 Assistant U.S. Attorneys and more than 350 support personnel carrying out the multitude of the Office’s responsibilities. Officials from the Office stated that most tasks on referred lobbying compliance cases are administrative, such as researching and responding to referrals and sending notices to the lobbyists requesting that they file reports or correct reported information. The Office has five staff members who work on lobbying noncompliance issues in addition to other duties: a deputy chief, three assistant U.S. Attorneys, and an investigator. Officials stated that the Office’s other civil resources are dedicated to higher priority activities with a higher return to the taxpayer, such as health care fraud.

If the Office decides to pursue a case against a referred lobbyist, penalties may be imposed on lobbyists who intentionally fail to (1) remedy a defective filing within 60 days after notice of such a defect by the Senate Secretary or House Clerk’s Office and the U.S. Attorney’s Office or (2) comply with any other provision of the Act. Penalties, recently increased by HLOGA for offenses committed after January 1, 2008, involve a civil fine of not more than $200,000 and criminal penalties of not more than 5 years in prison. Criminal penalties may be imposed against lobbyists who knowingly and corruptly fail to comply with the Act. Officials from the Office stated that they have sufficient civil and criminal statutory authorities to enforce the Act.

The Office receives referrals of noncompliance from the Secretary of the Senate and Clerk of the House, who send referrals after they have twice contacted the lobbyists by letter to inform them of the need to remedy an error or file a missing report. Extended periods of time may elapse between when the Secretary of the Senate and the Clerk of the House send the first contact letter and when they make referrals to the U.S. Attorney’s Office. For example, the most recent referrals were received in April 2008 for the filing period that ended in 2006. According to the Office, lobbyists often respond to a contact letter from the Secretary of the Senate and Clerk of the House after referrals have been received by the Office. For this reason, once referrals are received and before sending out its own letters requesting compliance, the Office’s staff first reviews the Secretary of the Senate and Clerk of the House databases to determine whether each lobbyist has already resolved the compliance issue. Once this has been done, the Office sends a letter to each remaining lobbyist informing the lobbyist of the need to correct the problem or file reports. The Office attempts to verify a lobbyist’s address when letters are returned or no response is received after 60 days. Thereafter, the Office determines whether to pursue a case of noncompliance with HLOGA. Office officials told us that the work involved in this entire process takes a considerable amount of time and resources.
For an overview of the referral process, see figure 2. Referrals have increased in recent years, and as a result, the Office’s workload relative to lobbying disclosure has increased. The Secretary and Clerk automated their referral process in 2004 and began transmitting referrals to the Office electronically in 2006. According to Office officials, the automation of the referral process likely contributed to a significant increase in the number of referrals. Since 2004, the Office has received more than 4,000 referrals from the Secretary of the Senate and Clerk of the House. Because of a lack of consistent records in past years, the Office was unable to provide complete and accurate data for each reporting period prior to 2006 to indicate the number of letters it sent to lobbyists asking them to comply with the Act and the number of lobbyists who complied after the referral was received. Office officials indicated that such information would be useful to help them better track their workload and make resource decisions.

The Office has not received referrals for the 2007 reporting period. The Office received more than 1,000 referrals for the 2003, 2004, and 2005 reporting periods. In September 2007, the Office received 449 referrals for the mid-year 2006 reporting period. The most recent set of referrals was sent by the Secretary of the Senate in April 2008 and totaled approximately 330 referrals, all of which were for the 2006 year-end reporting period. Office officials provided additional information on these referrals and explained that they consolidated the 2006 year-end referrals for lobbyists that had more than one noncompliant report, leaving 268 lobbyists with one or more noncompliant filings. Officials researched the Senate database and determined that 16 of the 268 lobbyists filed a report after the Office received the referrals from the Senate. As a result, the Office recently sent 252 letters to lobbyists asking them to comply with the Act by promptly filing a report or an amendment to correct an issue that had been identified.

The Office does not have a formal, structured approach that enables it to readily prioritize the matters that should be the focus of its resources. For example, it does not identify those lobbyists who continually fail to file or otherwise do not comply with requirements of the Act. In commenting on a draft of this report, Office officials stated they have recently begun to redesign their computer database to more accurately track referrals and identify trends in past compliance matters in order to create a more structured approach for assigning resources. Office officials believe that once this is accomplished, it should provide a foundation that will allow the Office to better focus its lobbying compliance efforts. Such a structured approach becomes increasingly important in light of the Office’s growing workload. To date, the Office has been primarily focused on sending letters to lobbyists who have potentially violated the Act, requesting that they comply with the law and promptly file the appropriate disclosure documents. Resolution typically involves the lobbyists coming into compliance. Office officials told us that since the Act was passed in 1995, they have settled with three lobbyists and collected civil penalties totaling about $47,000. All of the settled cases involved a failure to file.
Under HLOGA, DOJ is required to file an enforcement report with Congress after each semiannual period beginning on January 1 and July 1, detailing the aggregate number of enforcement actions taken by DOJ under the Act during the semiannual period and, by case, any sentences imposed. On September 18, 2008, DOJ filed its first report for the semiannual period ending June 30, 2008.

Most registered lobbyists could provide support for their filings, and newly registered lobbyists largely met the reporting requirements. However, several lobbyists in our sample reported some lobbying activity that did not occur, a circumstance that diminishes the value of information reported to Congress. In addition to a lack of clarity in the available guidance, the absence of documentation requirements and the fact that some lobbyists estimate amounts to be included in their reports may have resulted in some inaccurate information reported to Congress. Based on these observations, we believe that the lobbying community could benefit from creating an organization to
• share examples of best practices for the types of records maintained to support filings and use the information gathered over an initial period to formulate minimum standards for recordkeeping;
• provide training for the lobbying community on reporting and disclosure requirements, intended to help the community comply with the Act; and
• report annually to the Secretary of the Senate and the Clerk of the House on opportunities to clarify existing guidance and ways to minimize sources of potential confusion for the lobbying community.

The recent increase in public and congressional attention to lobbyists and their interactions with government officials, along with the increase in disclosure requirements, underscores the importance of enforcing the Act. To better address potential issues of noncompliance, the limited resources of the Department of Justice and the U.S. Attorney’s Office need to be targeted toward the most significant and repeated cases of noncompliance. Without a structured approach, the Office lacks assurance that it is investing its limited resources in the most useful manner. Office officials believe that the recently initiated effort to redesign the Office’s computer database to more accurately track referrals should provide a structured approach to addressing problem filers.

We recommend that the U.S. Attorney for the District of Columbia complete efforts to develop plans for a structured approach to focus limited resources on those lobbyists that continually fail to file as required or are otherwise not in compliance. Such an approach should require the Office to track referrals when they are made, record the reasons for the referrals, record the actions taken to resolve them, and assess the results of those actions.

We provided a draft of this report to the Attorney General for the Department of Justice (DOJ) for review and comment. On behalf of DOJ, the U.S. Attorney for the District of Columbia provided us with written comments (see app. III). The U.S. Attorney for the District of Columbia concurred with our recommendation and stated that the office has devoted appropriate attention to enforcing the Act and plans to continue to develop an approach to focus limited resources on lobbyists that continually fail to file as required or otherwise fail to comply with the Act. The U.S. Attorney noted that his office is taking or planning to take actions that should allow the office to develop a more structured approach as we recommended.
Specifically, the U.S. Attorney indicated that the office plans to enhance its database to improve tracking and has assigned an additional staff member to assist with lobbying compliance matters. The Office of the U.S. Attorney also provided technical comments, which we have incorporated as appropriate.

We are sending copies of this report to the Attorney General, the Secretary of the U.S. Senate, the Clerk of the U.S. House of Representatives, and other interested congressional committees and members. Copies of this report will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. Please contact George Stalcup at (202) 512-9490 or stalcupg@gao.gov if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Consistent with the requirements of the Honest Leadership and Open Government Act (HLOGA), our objectives were to determine the extent to which lobbyists can demonstrate compliance by providing support for information on registrations and reports filed in response to requirements of the amended Lobbying Disclosure Act (the Act); identify the challenges lobbyists cite in complying with the Act and their suggestions for improving compliance; and describe the process of referring noncompliance cases to the Department of Justice (DOJ) and the resources and authorities available to DOJ in its role in enforcing compliance with the Act.

To respond to the requirements of HLOGA, we used information in disclosure databases maintained by the Secretary of the Senate and the Clerk of the House of Representatives. To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single Web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases that result from chamber differences in data processing; however, we have no reason to believe that the content of the two systems would vary substantially. While we determined that both the House and Senate disclosure data were sufficiently reliable for identifying a sample of first quarter reports and for assessing whether newly filed registrants also filed required reports, we chose to use data from the Clerk of the House for ease of processing. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House—both of which have key roles in the lobbying disclosure process—although we met with officials from each office, and they provided us with general background information at our request.

To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a random sample of 100 of the 19,861 first quarter reports filed by the April 21 deadline and available in the House database as of our download date of May 12, 2008.
We later determined that a portion of the reports in the database were amendments or test cases, and thus 17,801 first quarter reports were in scope for our sample. Our sample is based on random selection, and it is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that we could have drawn. All percentage estimates in this report have 95 percent confidence intervals of within plus or minus 11 percentage points of the estimate itself, unless otherwise noted.

We contacted each lobbyist in our sample and asked them to provide support for eight key elements in their reports: the amount of money received for lobbying activities; the amount of money spent on lobbying activities; the specific issues on which they lobbied; the houses of Congress and federal agencies that they lobbied; the names of individuals who acted as lobbyists for the client listed on the report; the names of foreign entities with an interest in the client; the names of individuals no longer acting as lobbyists for the client; and the names of any member organizations of a coalition or association that actively participated in lobbying activities on behalf of the client. Our work to examine lobbyists’ compliance was limited to reviewing support provided by the lobbyists, which included both documentation and oral explanations. Neither the law nor guidance currently specifies any documentation requirements for information reported under the Act.

To determine whether the Act’s requirement for registrants to file a report in the quarter of registration was met during the first quarter of 2008, we matched the 1,460 records in the House’s first quarter registration file as of May 13, 2008, to those in the first quarter report filings using House ID, Senate ID, and text matching procedures. We examined all first quarter registrations filed and signed on or before March 31, 2008. We deleted 31 duplicate registrations and selected only the most recent registration each lobbyist filed for each particular client. We electronically matched registrations with reports using House and Senate identification numbers, lobbyist organization name, and client name. We identified 94 perfect matches, then relaxed our criteria to allow for minor typos and missing identification codes and identified an additional 1,233 registrations with corresponding reports in the first quarter filings. We could not readily identify matches in the report database for the remaining 102 registrations.

We obtained views from the lobbyists included in our sample of reports on any challenges to compliance and how those challenges might be addressed. To describe the process used in referring cases to the Department of Justice and provide information on the resources and authorities used by the department in its role in enforcing compliance with HLOGA, we interviewed department officials, obtained information from those involved in the referral process, and obtained data on the number of cases referred, pending, and resolved. Our objectives did not include identifying lobbyists that failed to register and report in accordance with HLOGA requirements or determining whether, for those lobbyists that did register and report, all lobbying activity was disclosed.
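To illustrate how a plus-or-minus figure of this size follows from a sample of 100, the sketch below applies the standard normal-approximation confidence interval for a proportion, with a finite population correction for sampling without replacement from the 17,801 in-scope reports. This is a textbook formula offered for illustration only; it is not necessarily the exact estimator used in this review (exact binomial intervals, for instance, can run somewhat wider, which is consistent with the stated bound of 11 percentage points).

```python
import math

def ci_half_width(p: float, n: int, pop: int, z: float = 1.96) -> float:
    """95 percent confidence interval half-width for an estimated proportion p,
    from a simple random sample of n drawn without replacement from a
    population of size pop (normal approximation with finite population
    correction)."""
    fpc = math.sqrt((pop - n) / (pop - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Worst case is p = 0.5 for a sample of 100 of the 17,801 in-scope reports:
# roughly +/- 9.8 percentage points under this approximation.
print(round(100 * ci_half_width(0.5, 100, 17_801), 1))  # ~9.8
```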
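The two-pass matching procedure described above (exact matches on identifiers and names first, then relaxed criteria that tolerate minor typos) can be sketched roughly as follows. The record layout, field names, and similarity threshold are our assumptions for illustration; the actual matching code used in this review is not published in this report.

```python
import difflib

def norm(text: str) -> str:
    # Collapse case and whitespace so trivial inconsistencies do not block a match.
    return " ".join(text.lower().split())

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    # Similarity ratio in [0, 1]; tolerant of misspellings and minor typos.
    return difflib.SequenceMatcher(None, norm(a), norm(b)).ratio() >= threshold

def match_registration(reg: dict, reports: list) -> dict | None:
    """Return the first report that appears to correspond to a registration.

    Pass 1: perfect match on a shared identification number or on
            registrant and client names.
    Pass 2: relaxed text match allowing minor inconsistencies.
    Registrations with no match in either pass would be flagged for follow-up.
    """
    for rpt in reports:
        if reg.get("house_id") and reg.get("house_id") == rpt.get("house_id"):
            return rpt
        if norm(rpt["registrant"]) == norm(reg["registrant"]) and \
           norm(rpt["client"]) == norm(reg["client"]):
            return rpt
    for rpt in reports:
        if similar(rpt["registrant"], reg["registrant"]) and \
           similar(rpt["client"], reg["client"]):
            return rpt
    return None
```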
We conducted this performance audit from March 2008 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on client names.

In addition to the contacts named above, Robert Cramer, Associate General Counsel; Bill Reinsberg, Assistant Director; Michael Volpe, Assistant General Counsel; Katrina Taylor, Analyst-in-Charge; Christopher Backley; Ellen Grady; Anna Maria Ortiz; Melanie Papasian; Sabrina Streagle; and Greg Wilmoth made key contributions to this report. Assisting with lobbyists’ file reviews and interviews were Stephen Ander, Amy Bowser, Dewi Djunaidy, Daniel Dunn, Karin Fangman, Melanie Helser, Ashleigh Kades, Olivia Leonard, Andrea Levine, Ryan Little, Mary Martin, Jeff McDermott, Jackie Pontious, Wes Sholtes, A.J. Stephens, and Tammy Stenzel.
The Honest Leadership and Open Government Act (HLOGA) of 2007 amends the Lobbying Disclosure Act of 1995 by doubling the frequency of lobbyists' reporting and increasing criminal and civil penalties. This is GAO's first report in response to the Act's requirement for GAO to annually (1) determine the extent to which lobbyists can demonstrate compliance with the Act by providing support for information on their registrations and reports, (2) describe challenges identified by lobbyists to complying with the Act, and (3) identify the process for referring cases to the Department of Justice and the resources and authorities available to effectively enforce the Act. GAO reviewed a random sample of 100 reports filed by lobbyists during the first quarter of calendar year 2008. This methodology allowed GAO to generalize to the population of 17,801 reports filed. GAO also met with lobbyists regarding their filings and with Department of Justice officials regarding resources and authorities.

GAO estimates that lobbyists could provide accurate supporting information—in either written or verbal form—on income or expenses for at least 95 percent of all first quarter reports filed requiring this information. The legislation and guidance do not contain requirements for lobbyists to create or maintain documentation in support of the registrations or reports they file. Nonetheless, lobbyists were able to provide written or oral support for all required elements of the individual reports GAO examined. However, the extent to which lobbyists could provide written documentation varied for different aspects of the reports. GAO estimates that lobbyists have written documentation to support income or expenses for approximately 91 percent of first quarter reports that required this information. In contrast, for a separate element listing the persons who acted as lobbyists, GAO estimates that lobbyists have written documentation for 35 percent of reports that required this information. Also, the majority of lobbyists newly registered with the Secretary of the Senate and Clerk of the House in the first quarter of 2008 also filed required disclosure reports for the period. However, for about 7 percent of the registrants, GAO could not identify a clear, corresponding report on file for their lobbying activity, likely because a report was not filed or because of a mismatch of information in reports that were filed.

While a number of lobbyists felt that existing guidance for filing required registrations and reports was sufficient, others believed additional clarifications were needed, such as on issue area activity codes and on how to report various pieces of information about their organizations and lobbying activity. Several lobbyists also expressed uncertainty about what constitutes reportable lobbying activity under the law and how much detail they needed to provide on the specific lobbying issues for each client. The Act included the sense of Congress that the lobbying community should create an organization to develop training and standards for lobbying. GAO's work reinforces that such an organization would be beneficial and could share best practices, provide training on the types of records to support filings, and report annually on opportunities to clarify existing guidance.

The United States Attorney's Office for the District of Columbia assigns its resources for lobbying compliance issues based on competing priorities within the Office.
The Office has five staff members, including a deputy chief, three assistant U.S. Attorneys, and one investigator, who perform lobbying noncompliance follow-up among other duties. Officials from the Office told us they have sufficient civil and criminal statutory authorities to enforce the Act. The Office's lobbying compliance workload has increased in recent years. However, it currently lacks a structured approach for targeting its resources to the most significant noncompliance cases. Such an approach would require the Office to track referrals when they are made, record the reasons for the referrals, record the actions taken to resolve them, and assess the results of those actions. The Office has recently begun to redesign its computer database to more accurately track referrals received in past years and to identify trends in past compliance matters.
Under the Federal Food, Drug, and Cosmetic Act (FDCA), FDA is responsible for reviewing the safety and effectiveness of medical devices before they go to market (premarket review) and for ensuring that they remain safe and effective afterwards (postmarket oversight). Manufacturers intending to sell medical devices in the United States, including reprocessed SUDs, must register with FDA and provide information listing the devices they intend to market. FDA considers establishments engaged in reprocessing (that is, any activity needed to render a used SUD ready for use on a subsequent patient) to be the manufacturers of those reprocessed SUDs. Establishments, including reprocessing establishments, are required to update their registrations annually and their device listings twice each year.

FDA’s premarket review activities for devices—that is, for reusable devices, for originally manufactured SUDs, and for reprocessed SUDs—mainly involve analyzing information submitted by those establishments that plan to market devices, including clinical or engineering documents and proposed labeling and instructions for use. Devices encompass a wide range of complexity and potential risk, and higher-risk or innovative devices require a more rigorous level of premarket review than lower-risk devices. For example, many relatively simple, low-risk devices, such as scissors used for medical purposes, are exempt from premarket review requirements. For other devices, such as catheters, manufacturers are required to submit documentation for FDA’s review and receive clearance before the devices may be marketed. FDA has assigned about 1,700 device types to one of three classes based on the level of risk posed and the controls necessary to ensure their safety and effectiveness. Class I (low-risk) devices include such things as elastic bandages. Class II (medium-risk) devices include items like powered bone drills. Class III (high-risk) devices include those that support or sustain human life, such as balloon angioplasty catheters. Most class I devices are exempt from the premarket submission requirements set forth in Section 510(k) of the FDCA (premarket notification). For most class II devices, manufacturers are required to submit a premarket notification report, which must provide evidence that the device is substantially equivalent to a device already on the market before FDA will allow it to be marketed. For class III devices, manufacturers are required to submit an application for premarket approval, which must provide evidence, including clinical data, demonstrating that the device is safe and effective.

FDA’s postmarket surveillance activities mainly involve inspecting device establishments and collecting and analyzing reports about device safety. FDA inspects registered device establishments, including reprocessing establishments, to assess compliance with applicable quality control and adverse event reporting regulations, among others. In addition to inspecting device establishments, FDA’s postmarket activities include collecting and analyzing reports of device-related adverse events to ensure that devices already on the market remain safe and effective. Manufacturers are required to report device-related deaths, serious injuries, and certain malfunctions to FDA.
In addition, user facilities, such as hospitals and nursing homes, are required to report device-related deaths to FDA and to the device manufacturer, and to report serious injuries to the manufacturer or, if the manufacturer is unknown, to FDA. Both manufacturers and user facilities may also voluntarily report to FDA less-serious device-related events, such as malfunctions that are not likely to result in subsequent serious injuries if the malfunction were to recur. FDA maintains databases that include both mandatory and voluntary reports of device-related adverse events, which agency officials can search to conduct research on trends or emerging problems with device safety. FDA scientists review these reports, request follow-up investigations, and determine whether further action is needed to ensure patient safety. Such action may include product recalls, public health advisories to notify health care providers and the public of potential device-related health and safety concerns, or requiring a manufacturer to change the instructions in its device labeling. FDA officials told us that the vast majority of reports involve a device malfunction that has the potential to cause a death or serious injury if the malfunction were to recur, even though there was no death or serious injury in the reported event.

FDA has information on domestic reprocessing establishments and the devices they are reprocessing or considering for reprocessing, but it does not have data on the extent of actual production or on where the devices are being used. Collectively, according to FDA, 11 establishments were actively reprocessing or planning to reprocess more than 100 different types of SUDs in the United States as of July 2007. (See app. II for a list of the types of SUDs that have been listed by reprocessing establishments.) While definitive information on the size of the reprocessed SUD market is not available, representatives of the reprocessing industry estimate that 3 of the 11 registered reprocessing establishments (2 of which are owned by the same firm) account for the vast majority of the total reprocessing business in the United States. Only one hospital was included among the 11 active reprocessing establishments identified by FDA. Our inquiries with hospital representatives and with federal agencies that administer hospitals, such as the Department of Veterans Affairs, indicated that the use of reprocessed SUDs among hospitals varies.

FDA identified 11 establishments actively reprocessing SUDs in the United States as of July 2007, 1 of which was a hospital. Seven establishments engaged exclusively in reprocessing or in reprocessing and one other activity, such as contract sterilization. According to representatives of the reprocessing industry, 3 of these 7 account for about 90 percent of all SUD reprocessing. Four of the 11 reprocessing establishments registered with FDA to undertake three or more FDA-regulated activities, including distribution or manufacturing. For example, 1 reprocessing establishment manufactures over 80 different types of medical devices but reprocesses only one type of SUD, which it also manufactures. Four of the 11 establishments, including the hospital, have each listed only one type of reprocessed SUD. The more than 100 types of devices that reprocessing establishments reported actively reprocessing or planning to reprocess represent devices with a range of intended uses, some more invasive than others.
For example, compression sleeves, which are used to provide intermittent compression to a patient's limbs to help prevent postoperative blood clots from forming, are intended to make contact with patients' skin only, not to enter the body. In contrast, surgical devices such as orthopedic drill bits or surgical saw blades are intended for use in internal parts of the body. Electrophysiology catheters are inserted into the heart to measure cardiac rhythm and have been reprocessed for over 20 years. While we found no reliable data on the volume of reprocessed SUDs by device type, representatives of 3 large reprocessing establishments stated that noninvasive devices such as compression sleeves account for the greatest volume of their overall business, while surgical devices represent a much smaller share.

Data on the exact size of the SUD reprocessing industry—in terms of the volume or value of reprocessed SUDs sold—and how it compares to the original SUD industry or the overall medical device industry are not available. FDA neither collects nor reports on the volume or value of reprocessed SUDs sold; the agency also does not maintain data on the volume or value of original SUDs or of all medical devices sold. Regarding private sector data sources, we found that data on the SUD reprocessing industry were either not available or were considered proprietary by industry sources. Similarly, representatives of trade associations that represent establishments that manufacture original SUDs and reusable devices could not provide data on the proportion of the overall medical device industry that consists of devices that are labeled for single use and could be reprocessed.

Two FDA studies indicate that hospital use of reprocessed SUDs varies. In 2002, FDA reported that about one-fourth of U.S. hospitals used at least one type of reprocessed SUD, with larger hospitals being more likely to do so. To develop this estimate, FDA surveyed more than 5,000 hospitals. Nearly half of responding hospitals with more than 250 beds reported using reprocessed SUDs, compared with 12 percent of responding hospitals with fewer than 50 beds. This information was supplemented by a more recent study in 2005. In that study, which focused on hospitals' level of satisfaction with reprocessed SUDs, FDA received information from 102 representatives of hospitals across the nation. About 40 percent indicated they used a third party to reprocess SUDs. FDA followed up with focus groups to obtain more detailed information on the differing perspectives of various types of hospital personnel about the hospitals' use of reprocessed SUDs. In general, participating hospitals that reported using reprocessed SUDs indicated their facilities had specific policies regarding reprocessing, used a variety of types of reprocessed SUDs, and believed that reprocessing provides substantial cost savings.

In our discussions with representatives of reprocessing establishments and a managed care organization that runs several hospitals, we were told that hospitals or hospital systems generally set their own policies regarding whether to use reprocessed SUDs, which reprocessing establishment to use, and which reprocessed SUDs are acceptable to the hospitals' physicians and other clinical personnel. This holds true for some federal hospitals as well. The Department of Defense, for example, allows individual medical facilities the option of using SUDs that are reprocessed by establishments that are registered with FDA as reprocessors.
According to Department of Defense officials, as of October 2007, 3 of the Navy's 22 medical centers and hospitals reported using reprocessed SUDs; 4 of the Army's 26 medical centers and hospitals reported using, or planning to use, reprocessed SUDs; and 1 of the Air Force's 17 medical centers and hospitals reported using reprocessed SUDs. In contrast to the Department of Defense policy, the Department of Veterans Affairs has had an agencywide policy prohibiting the use of reprocessed SUDs in any of its medical centers since at least 1991. According to Department of Veterans Affairs officials, the agency could not determine whether reprocessed SUDs are safe. Nevertheless, it does not allow their use because manufacturers did not design SUDs to be used more than once and, as a consequence, do not provide instructions on cleaning and sterilizing these devices. These officials told us that the department's policy has remained largely unchanged, although the agency has reconsidered it at various times.

FDA has taken actions, both on its own initiative and in response to legislation, to strengthen the agency's oversight of reprocessed SUDs. These actions include (1) requiring additional premarket data submissions for 72 types of reprocessed SUDs and (2) conducting postmarket activities such as inspections of reprocessing establishments to ensure compliance with regulatory requirements and other surveillance to assess whether reprocessing is associated with an increased public health risk.

FDA's premarket oversight of reprocessed SUDs has increased, beginning with actions FDA took on its own initiative in 2000. In August of that year, FDA issued guidance that clarified its policies on the regulation of reprocessed SUDs. This guidance was directed at hospitals and third-party entities engaged in reprocessing SUDs for reuse. At the time, a sizeable minority of U.S. hospitals were thought to be reprocessing their own SUDs without FDA oversight. FDA recognized that hospitals were not likely to be familiar with its regulations, so the guidance included time frames for these reprocessing establishments to comply. According to FDA officials, the agency intended to subject each type of reprocessed SUD to the same level of premarket review as required of original SUDs. For example, if the SUD was exempt from premarket requirements before it was used for the first time, the reprocessed SUD would also be exempt.

MDUFMA, enacted in 2002, directed FDA to review the premarket submission requirements for reprocessed SUDs and identify those devices for which FDA would require additional validation data to document cleanliness, sterility, and performance following reprocessing. This meant that reprocessing establishments had to submit additional premarket documentation for certain types of reprocessed SUDs to demonstrate that they remain safe and effective or substantially equivalent to another device already on the market. MDUFMA directed FDA to identify devices that fell into the following two categories and to determine whether additional information was needed to determine their continued marketability: The first category consisted of reprocessed SUDs that had been exempt from premarket notification at the time MDUFMA was enacted. For these reprocessed SUDs, FDA was required to determine whether the devices' premarket notification exemptions should be terminated to provide reasonable assurance of their safety and effectiveness.
Manufacturers of devices identified by FDA were required to provide premarket notification with validation data on cleaning, sterilization, and functional performance to ensure that the reprocessed SUDs remained safe and effective after the maximum number of reprocessing cycles. FDA, in response, identified 20 types of reprocessed SUDs that met these criteria and revoked their premarket notification exemptions. Examples of types of reprocessed SUDs that had their exemptions terminated and that were required to submit the additional validation data included noncompression heart positioners (devices intended to move, lift, and stabilize the heart during open heart surgery), nonelectric biopsy forceps (devices used to remove a specimen of tissue for microscopic examination), and various surgical devices such as specialized needles and catheters. The second category consisted of reprocessed SUDs that were already subject to premarket notification at the time MDUFMA was enacted. FDA was required to determine whether additional documentation on cleaning, sterilization, and performance was necessary to ensure that the device remained safe and effective after the maximum number of reprocessing cycles. FDA, in response, identified 52 types of reprocessed SUDs that met those criteria and required that premarket submissions for them include such data. Examples of device types that were subject to the additional validation data requirement included electric biopsy forceps, surgical drills and accessories, and oximeters (devices used to measure the level of oxygen in a patient’s blood). Appendix III summarizes FDA’s methodology for identifying the 72 types of reprocessed SUDs for which the agency has required additional premarket data submissions in accordance with MDUFMA. As part of its premarket review, FDA evaluates not only the devices themselves but the accompanying labeling and instructions for use. MDUFMA required that the labeling of all reprocessed SUDs state that the device had been reprocessed and the name of the establishment that reprocessed it. This provision took effect in January 2004 and applies to devices marketed after that date. MDUFMA and subsequent legislation also required that reprocessed SUDs or an attachment to such devices “prominently and conspicuously” bear the reprocessing establishment’s name, abbreviation, or symbol. FDA issued guidance that first became effective on August 1, 2006, to help reprocessing establishments comply with this requirement. FDA’s actions regarding its postmarket oversight of reprocessed SUDs have included (1) clarifying that SUD reprocessing establishments are subject to the same inspection requirements as other device manufacturing establishments and (2) updating reporting forms to better identify those device-related adverse event reports involving reprocessed SUDs. With the issuance of its August 2000 guidance, FDA intended to make clear its plans to subject hospitals and other third-party establishments that reprocess SUDs to FDA inspection for compliance with applicable regulatory requirements just like other establishments manufacturing medical devices. For the 11 U.S. establishments actually reprocessing SUDs as of July 2007, FDA had inspected 10 at least once during the period August 2004 through October 2007. These included multiple inspections of the 3 reprocessing establishments that industry representatives estimate to account for about 90 percent of all U.S. SUD reprocessing. 
FDA had not inspected 1 of the 11 reprocessing establishments. This establishment was first registered as a reprocessing establishment in 2006, and FDA officials told us that the agency plans to inspect it in 2008. We reviewed FDA summaries and other documents related to inspections conducted from August 2004 through October 2007 for the 10 inspected reprocessing establishments. For 3 establishments, none of the inspections indicated that corrective actions were needed. That is, no objectionable conditions or practices were found during the inspection. For the remaining 7 reprocessing establishments, at least one FDA inspection for those establishments during this period found that corrective actions were needed. This means that the inspection identified objectionable conditions or practices through which the establishment failed to meet either regulatory or administrative requirements. In general, in cases like these, depending upon the severity of the objectionable conditions identified, FDA determined whether the establishments could take corrective actions voluntarily, or whether conditions warranted issuance of FDA warning letters or more severe enforcement actions such as product seizures or injunctions. In the cases we reviewed that involved corrective actions, we found the following: For 6 establishments, FDA investigators determined that actions taken by the establishments were adequate to address the deficiencies identified during the establishment inspections. FDA considers these inspections to be resolved. For example, one inspection revealed that the establishment had reprocessed two models of SUDs before it received FDA approval to reprocess them. The firm stopped reprocessing these models of SUDs prior to FDA’s inspection and FDA inspectors determined that the establishment had voluntarily taken the corrective actions that were required. In another instance, FDA investigators found that the establishment had not maintained complaint files appropriately. Specifically, the establishment received a complaint from one hospital that five blood pressure cuffs reprocessed by that establishment did not function properly. However, the establishment listed all five devices as a single complaint rather than documenting each nonfunctioning device separately as required. At the end of the inspection, the establishment agreed to make each device a separate complaint rather than group several devices under one complaint number. The inspection for 1 establishment was open and under investigation as of November 2007. For this establishment, FDA inspectors identified a number of objectionable conditions, including instances in which the establishment did not adequately investigate reported problems associated with reprocessed SUDs or submit reports of device problems to FDA within the required time. In September 2007, FDA conducted a meeting with officials representing the establishment to discuss the inspection findings in detail. The establishment subsequently provided a written response to FDA containing the actions it proposed to take in order to correct the deficiencies identified by FDA investigators. FDA officials told us that the agency will not consider the inspection deficiencies to be resolved until FDA investigators reinspect the establishment. As of November 2007, FDA had not scheduled a reinspection of this establishment. 
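The escalation logic described above can be summarized in a simplified sketch; the severity levels and action names are our own simplification for illustration, not FDA's formal inspection classifications:

    # Simplified sketch of the inspection follow-up logic described above.
    # Severity levels and action names are illustrative, not FDA's formal
    # inspection classifications.
    def disposition(objectionable_conditions: list, severity: str) -> str:
        """Map inspection findings to a follow-up action."""
        if not objectionable_conditions:
            return "no action indicated"
        if severity == "minor":
            return "voluntary corrective action; verify resolution"
        if severity == "serious":
            return "warning letter; reinspect before considering resolved"
        return "escalated enforcement (e.g., product seizure or injunction)"

The cases discussed above span this range: six establishments resolved deficiencies through voluntary corrective action, while one remained open pending reinspection.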
MDUFMA directed FDA to modify its forms for mandatory and voluntary reporting of incidents involving devices to indicate when device-related adverse event reports involved reprocessed SUDs. Since fall 2003, FDA has included a check box in its mandatory and voluntary adverse event reporting forms to indicate whether the device associated with the adverse event was a reprocessed SUD. In addition to the change already made, an FDA workgroup is investigating whether further refinements to the device-related adverse event reporting forms, such as additional instructions, could improve the accuracy of the adverse event reports associated with reprocessed SUDs. FDA officials told us that, while the new labeling and marking requirements for reprocessed SUDs and the updated reporting forms may eventually enhance the agency's ability to identify device-related adverse event reports involving reprocessed SUDs, as of July 2007 officials had not detected an appreciable change in the reports submitted.

While FDA has made changes to its data collection process regarding reprocessed SUD-related adverse events, the data are not suitable for a rigorous comparison of the safety of reprocessed SUDs relative to original SUDs of the same type on their initial use. Such a comparison would require collecting additional data, such as the type of device and adverse event and the number of original and reprocessed SUDs of that type in use. The limited number of peer-reviewed studies related to reprocessing that we identified were insufficient to support a comprehensive conclusion on the relative safety of reprocessed SUDs. Despite the limitations of available data, FDA's analysis of reported device-related adverse events does not show that reprocessed SUDs present an elevated health risk.

While FDA's database of device-related adverse events is designed to provide information about trends, such as infection outbreaks or common user error caused by inadequate instructions, it is not comprehensive. That is, the system cannot generate sufficient data on device performance to compare the safety of reprocessed SUDs either with original SUDs on their initial use or with other devices in general. Such a study, at a minimum, would require data identifying the type of device and adverse event, the number of original and reprocessed SUDs of that type in use, the number of times each reprocessed SUD was used, and the rate of adverse events associated with the original devices. FDA officials, including the Director of the Center for Devices and Radiological Health, have described the effort that would be required and acknowledged the shortcomings of the current adverse event reporting system for generating comparative safety data. FDA officials indicated to us, however, that such studies would not be an efficient use of agency resources given the existing level of FDA oversight.

To supplement our review of the safety information developed and analyzed by FDA, we conducted a review of the scientific literature related to SUD reprocessing published in peer-reviewed journals since 2000. We identified six studies that addressed the safety of reprocessed SUDs. On examination, none of the six studies was comprehensive enough to support an overall conclusion about the relative safety of reprocessed SUDs compared with SUDs on their initial use.
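Before turning to those limitations, the data demands of such a comparison can be made concrete. In the minimal sketch below, every number is hypothetical; the denominators (devices in use and uses per device) are precisely the data that FDA's reporting system does not capture:

    # Hypothetical sketch of the rate comparison described above.
    # All counts are invented; the use counts are unknown in practice.
    def adverse_event_rate(events, device_uses):
        """Adverse events per 10,000 device uses."""
        return 10_000 * events / device_uses

    original_rate = adverse_event_rate(events=12, device_uses=250_000)    # 0.48
    reprocessed_rate = adverse_event_rate(events=4, device_uses=60_000)   # 0.67

    # A rate ratio near 1.0 would suggest comparable safety, but the
    # estimate is only as good as the unknown denominators.
    rate_ratio = reprocessed_rate / original_rate                         # about 1.4

Without reliable denominators for each device type, such a ratio cannot be computed with any confidence, which is why neither FDA's database nor the published studies support a definitive comparison.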
The studies were limited in that they tested relatively few devices, and the reprocessing establishments involved had not been inspected by FDA.

FDA has reviewed available adverse event reports associated with reprocessed SUDs and has not identified a causative link between the adverse events and the fact that the devices involved were reprocessed. In September 2006, the Director of FDA's Center for Devices and Radiological Health testified that, based on available adverse event data, FDA had identified 434 reports submitted from October 2003 to July 2006 in which reprocessed SUDs were identified on the reporting form. FDA determined that the majority of these reports, including all 15 of the reports involving deaths, did not involve a reprocessed SUD. For example, FDA determined that many of the reported events involved reusable devices, such as magnetic resonance imaging machines, or SUDs on their initial use. Of the 434 reports, FDA further reviewed the 65 events that actually involved, or were suspected to involve, a reprocessed SUD and in which the reprocessed SUD was one of several possible causal factors in the adverse event. In reviewing these 65 reports, FDA found that the types of adverse events reported to be associated with the use of reprocessed SUDs were the same types of events that are reported for new, nonreprocessed devices.

In 2005, FDA consulted hospitals participating in the agency's Medical Product Safety Network (MedSun) about their experiences, including adverse events or safety concerns, with reprocessing. None of the representatives of MedSun hospitals who participated in the FDA focus groups reported being aware of any infections related to the use of reprocessed SUDs. However, hospital representatives noted that if an infection occurred, it would be very difficult to discern whether a reprocessed SUD was the cause. Similarly, none of the hospital representatives expressed significant concerns about potential malfunctions with reprocessed SUDs, even though some of them indicated that malfunctions of reprocessed SUDs occurred on occasion (for example, surgical blades and other tools sometimes may not have been sharpened properly). Overall, however, participating hospital representatives generally expressed confidence in reprocessed SUDs, with some participants stating that there were actually fewer performance problems with reprocessed SUDs than with new SUDs. According to FDA, all participants believed that reprocessing establishments are more stringently regulated by FDA than are the manufacturers of the original devices, and this provided them a sense of confidence in the reprocessing process.

After reviewing the available evidence—including FDA's process for identifying and investigating device-related adverse events reported to involve reprocessed SUDs, peer-reviewed studies published since 2000, and the results of our and FDA's consultations with hospital representatives—we found no reason to question FDA's analysis indicating that no causative link has been established between reported injuries or deaths and reprocessed SUDs. That is, the available information regarding safety, while not providing a rigorous safety comparison between reprocessed SUDs and other devices, does not indicate that reprocessed SUDs currently in use pose an increased safety threat. In commenting on a draft of this report, HHS provided language to clarify several sentences, which we generally incorporated. We also incorporated HHS's technical comments as appropriate.
HHS’s written comments appear in appendix V. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Commissioner of FDA, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. GAO staff who made major contributions to this report are listed in appendix VI. To address the report objectives, we (1) reviewed relevant laws, regulations, and agency guidance; (2) interviewed Food and Drug Administration (FDA) officials, representatives of professional associations of manufacturing establishments, and the Association of Medical Device Reprocessors (AMDR); (3) interviewed officials from a provider association, private hospitals, and the Departments of Defense and of Veterans Affairs regarding their policies on the use of reprocessed single-use devices (SUD); and (4) reviewed FDA data, market research, and peer-reviewed studies. We conducted our work between November 2006 and January 2008 in accordance with generally accepted government auditing standards. We consulted a variety of sources, including FDA officials who track industry trends, professional associations representing device manufacturers and reprocessing establishments, and hospitals. We found that neither industry nor FDA representatives were able to provide comprehensive information on the number and volume of devices manufactured for the United States, or on the subset of devices that are SUDs or reprocessed SUDs. To determine the number of reprocessing establishments, we reviewed FDA data on the number of registered reprocessing establishments. FDA data indicated that more than 40 establishments were registered as reprocessing establishments as of March 2007, including 13 located outside the United States. After we determined that the FDA list did not match information provided by two FDA district offices, FDA officials determined that many of the establishments had registered as reprocessing establishments in error and subsequently identified 11 establishments in the United States that, as of July 2007, were engaged in reprocessing SUDs. We determined FDA’s information on the number of establishments reprocessing SUDs in the United States as of July 2007 was sufficiently reliable for our purposes. However, given the errors in the FDA list of registered reprocessing establishments in 2007 and the lack of information on foreign establishments registered as reprocessors, we determined that FDA’s data were not sufficiently reliable to determine the number of establishments reprocessing SUDs prior to July 2007 or the number of foreign reprocessing establishments at any time. As a result, we were unable to analyze trends in the number of reprocessing establishments or the types of devices being reprocessed since 2000, and we were limited to reporting on domestic reprocessing establishments. 
Regarding the types of SUDs being reprocessed, our ability to provide precise information was limited: although FDA maintains databases of the types of devices that reprocessing establishments have listed with FDA, it does not confirm that all listed devices are currently available. As a result, FDA's data may include types of SUDs that the reprocessing establishments no longer reprocess, types of SUDs they plan to reprocess, or types of SUDs they listed in error—in effect, overstating the types of SUDs the establishments are reprocessing or plan to reprocess. In addition, representatives of one reprocessing establishment identified one device type listed in the FDA database that the establishment never reprocessed, but only resterilized and repackaged in unused form. While we were unable to determine their reliability, we used FDA's data listing the types of SUDs being reprocessed for the limited purpose of portraying the types of SUDs that the reprocessing establishments were reprocessing or planned to reprocess as of July 2007.

To determine what research has been published about the safety of reprocessed SUDs since we last reported on the topic in 2000, we reviewed FDA documents related to adverse events involving reprocessed SUDs and an FDA-sponsored survey of some hospitals' experience with SUDs, reviewed summaries of, and other documents related to, FDA inspections of reprocessing establishments conducted from August 2004 through October 2007, and conducted a literature search of studies (which we call articles) published in peer-reviewed journals from January 2000 through January 2007. We performed the literature review of peer-reviewed articles by searching the following databases: BIOSIS, EMBASE, Medline, ProQuest, and the Science Citation Index. Of the more than 30 articles located through the literature search, we identified a total of 6 articles that were published in peer-reviewed journals and that addressed the safety of reprocessed SUDs. These articles are listed below:

Colak, T.; Ersoz, G.; Akca, T.; Kanik, A.; Aydin, S. "Efficacy and Safety of Reuse of Disposable Laparoscopic Instruments in Laparoscopic Cholecystectomy: A Prospective Randomized Study." Surgical Endoscopy 18, no. 5 (2004): 727–731.

daSilva, M.; Ribeiro, A.; Pinto, T. "Safety Evaluation of Single-Use Devices After Submission to Simulated Reutilization Cycles." Journal of AOAC International 88, no. 3 (2005): 823–829.

Fedel, M.; Tessarolo, F.; Ferrari, P.; et al. "Functional Properties and Performance of New and Reprocessed Coronary Angioplasty Balloon Catheters." Journal of Biomedical Materials Research 78, no. 2 (2006): 364–372.

Lipp, M.; Jaehnichen, G.; Golecki, N.; et al. "Microbiological, Microstructure, and Material Science Examinations of Reprocessed Combitubes® After Multiple Reuse." Anesthesia & Analgesia 91 (2000): 693–697.

Roth, K.; Heeg, P.; Reichl, R. "Specific Hygiene Issues Relating to Reprocessing and Reuse of Single-Use Devices for Laparoscopic Surgery." Surgical Endoscopy 16, no. 7 (2002): 1091–1097.

Wilson, S.; Everts, R.; Kirkland, K.; et al. "A Pseudo-Outbreak of Aureobasidium Species Lower Respiratory Tract Infections Caused by Reuse of Single-Use Stopcocks During Bronchoscopy." Infection Control and Hospital Epidemiology 21, no. 7 (2000): 470–472.

On examination, none of these studies was comprehensive enough to support an overall conclusion about the relative safety of reprocessed SUDs compared with SUDs on their initial use.
Several limitations in the articles we identified through our literature review make it difficult to support an overall statement comparing the safety of reprocessed SUDs with the safety of other devices. These limitations include the following: Five of the six articles described studies that were conducted outside of the United States, so we could not determine whether the reprocessing methods and facilities would have met FDA's approval. The remaining article, while describing a study conducted in the United States, was published prior to MDUFMA's enactment in 2002 and subsequent FDA actions to implement new requirements. In addition, the articles reported on studies that tested few types of devices. Because each study used different types of devices, it is not possible to compare and aggregate their results to support general conclusions regarding the relative safety of reprocessed SUDs.

[Appendix table omitted: for each inspected reprocessing establishment, the table shows the years of inspections conducted from August 2004 through October 2007 and whether each inspection resulted in no action indicated or corrective action indicated; the uninspected establishment is shown as n.a. Notes to the table: Device types indicate all devices assigned to a distinct product code by FDA. Each device type may include a variety of actual instruments, manufacturers, and models. For example, some device types include the device itself, such as a powered saw, and its accessories. The establishment first registered as a reprocessing establishment in 2006; as of July 2007, no inspections had been conducted, but FDA officials reported plans to inspect the establishment in 2008.]

The Medical Device User Fee and Modernization Act of 2002 (MDUFMA) required the Food and Drug Administration (FDA) to identify reprocessed single-use devices (SUD) that should be subject to additional premarket data submission requirements to ensure their safety and effectiveness. To identify these reprocessed SUDs, FDA analyzed the risks of infection or inadequate performance for 229 types of SUDs that the agency identified as either actually or potentially being reprocessed. For purposes of implementing MDUFMA, FDA took into account such factors as the physical characteristics of each type of SUD, including coatings that could be damaged by reprocessing; the type of contamination associated with the type of SUD's intended use; and the severity of potential injuries that could result if that type of SUD fails after reprocessing. FDA published the results of its review in a series of Federal Register notices between April 2003 and September 2005. These devices either (1) had been exempt from premarket notification and had their exemptions revoked, and now also require validation data on cleaning, sterilization, and functional performance, or (2) were already subject to premarket notification and now also require the additional validation data. Reprocessing establishments that did not provide the required premarket notification and validation data by the deadlines established in these notices could no longer legally market those devices. Figure 1 summarizes the results of FDA's review in chart form.

As of May 30, 2007, FDA had received a total of 6 premarket notification submissions with additional validation data for 2 types of reprocessed SUDs that had their exemptions revoked following enactment of MDUFMA. Of these 6 submissions, 4 had been cleared by FDA and 2 were pending as of that date. FDA also received 88 submissions of premarket validation data for 16 types of reprocessed SUDs that had not been exempt at the time MDUFMA was enacted but for which additional validation data were subsequently required.
Of these 88 submissions, 74 were cleared by FDA, 4 were found not substantially equivalent and therefore not marketable, and 10 were either withdrawn or pending as of May 30, 2007.

The Food and Drug Administration's (FDA) reporting framework for device-related adverse events includes both mandatory and voluntary components, depending on who is doing the reporting. Under FDA's Medical Device Reporting (MDR) regulation, device user facilities (including hospitals and other providers) and manufacturers (including reprocessing establishments) must report deaths and serious injuries that a device has caused or may have contributed to. User facilities must report deaths to FDA and to the manufacturer, and serious injuries to the manufacturer (or to FDA if the manufacturer is unknown), whenever they become aware of information that reasonably suggests that a device has or may have caused or contributed to the death or serious injury of a patient. Manufacturers must report device-related deaths and serious injuries to FDA whenever they become aware of information that reasonably suggests that one of their devices has or may have caused or contributed to the event. Manufacturers are also required to submit device malfunction reports to FDA whenever they become aware of information that reasonably suggests that one of their marketed devices has malfunctioned and that the device or a similar device marketed by the manufacturer would be likely to cause or contribute to a death or serious injury if the malfunction were to recur. See table 1 for a summary of MDR mandatory reporting requirements.

In addition to its mandatory reporting component, FDA also has a voluntary component for reporting device-related adverse events, known as FDA's MedWatch program. Health care professionals can voluntarily report serious adverse events, product quality problems, or product use errors that they suspect are associated with the devices they prescribe, dispense, or use. Consumers and others can also voluntarily report adverse events, product use errors, or quality problems that they suspect are associated with the use of a device.

In addition to the contact named above, Kim Yamane, Assistant Director; Matt Byer; Julian Klazkin; Suzanne Rubins; Stan Stenersen; and Jennifer Wiley made key contributions to this report.

Food and Drug Administration: Methodologies for Identifying and Allocating Costs of Reviewing Medical Device Applications Are Consistent with Federal Cost Accounting Standards, and Staffing Levels for Reviews Have Generally Increased in Recent Years. GAO-07-882R. Washington, D.C.: June 25, 2007.

Food and Drug Administration: Limited Available Data Indicate That FDA Has Been Meeting Some Goals for Review of Medical Device Applications. GAO-05-1042. Washington, D.C.: September 30, 2005.

Single-Use Medical Devices: Little Available Evidence of Harm From Reuse, but Oversight Warranted. GAO/HEHS-00-123. Washington, D.C.: June 20, 2000.

Adverse Events: Surveillance Systems for Adverse Events and Medical Errors. GAO/T-HEHS-00-61. Washington, D.C.: February 9, 2000.
Within the Department of Health and Human Services (HHS), the Food and Drug Administration (FDA) is responsible for reviewing the safety and effectiveness of medical devices. The decision to label a device as single-use or reusable rests with the manufacturer. To market a reusable device, a manufacturer must provide data demonstrating to FDA's satisfaction that the device can be cleaned and sterilized without impairing its function. Alternatively, a single-use device (SUD) may be marketed without such data after demonstrating to FDA that the device is safe and effective if used once. Even though labeled for single-use, some SUDs are reprocessed for reuse with FDA clearance. This report addresses (1) the SUD reprocessing industry--the number of reprocessing establishments, the types of devices reprocessed, and the extent to which hospitals use reprocessed SUDs, (2) the steps FDA has taken to strengthen oversight of reprocessed SUDs, both on its own and in response to legislative requirements, and (3) the safety of reprocessed SUDs compared with other types of medical devices. GAO reviewed FDA data on reprocessors, reprocessed SUDs, and device-related adverse events, as well as FDA documents and inspection reports, studies published in peer-reviewed journals, and relevant statutes and regulations. GAO interviewed FDA officials and officials from associations of manufacturers, reprocessors, and providers. FDA has information on domestic reprocessing establishments, but it does not have data on the extent of actual production or on where the devices are being used. FDA officials identified 11 establishments that reported planning to market or actively marketing more than 100 types of reprocessed SUDs in the United States as of July 2007. Reprocessed SUDs ranged from devices used external to the body, such as blood pressure cuffs, to surgical devices used to repair joints. While many hospitals were believed to be reprocessing their own SUDs in 2000, FDA identified only one hospital in 2007 that was reprocessing SUDs. Reprocessed SUDs are being used in a variety of hospitals throughout the nation, including military hospitals. However, the Department of Veterans Affairs, which operates one of the nation's largest health care systems, prohibits their use entirely. Since 2000, FDA has taken a number of steps--on its own and in response to legislation--to enhance its regulation of reprocessed SUDs both before they go to market (called premarket review) and afterwards (called postmarket oversight). In 2000, FDA published guidance that clarified its policies on the regulation of reprocessed SUDs. This guidance was directed at third-party entities and hospitals engaged in reprocessing SUDs for reuse. Following legislation passed in 2002, FDA imposed additional requirements for about 70 types of reprocessed devices and implemented new labeling requirements so that users would recognize those devices that had been reprocessed. In terms of postmarket review, FDA now inspects reprocessors and monitors reports of adverse events involving reprocessed SUDs. Seven of the 10 reprocessing establishments that FDA inspected in the last 3 years had problems requiring corrective actions. Regarding adverse event reporting, FDA modified its reporting forms in 2003 to enable FDA to better identify and analyze those adverse events involving reprocessed SUDs. 
Neither existing FDA data nor studies performed by others are sufficient to draw definitive conclusions about the safety of reprocessed SUDs compared with similar original devices. While FDA has made changes to its data collection process regarding reprocessed SUD-related adverse events, the data are not suitable for a rigorous comparison of the safety of reprocessed SUDs with that of similar original SUDs. The other studies published since 2000 that GAO identified are likewise insufficient to support a comprehensive conclusion on the relative safety of reprocessed SUDs. FDA officials have concluded that the cost of conducting rigorous testing would not be an efficient use of resources, especially given that the available data, while limited, do not indicate that reprocessed SUDs present an elevated health risk. FDA has analyzed its data on reported adverse events related to reprocessed SUDs and has concluded that there are no patterns that point to these devices creating such risks. After reviewing FDA's processes for monitoring and investigating its adverse event data, GAO found no reason to question FDA's analysis. HHS provided language to clarify several sentences of a draft of this report, which GAO generally incorporated.
The federal government funds a wide array of programs intended to provide benefits or services to low-income individuals, families, and households. The 12 programs in this review include the largest of these programs, Medicaid, as well as some relatively small programs, such as WIC. The 12 programs are administered by seven different federal agencies. Table 1 shows the agencies responsible for each program and federal expenditures for fiscal year 2003, the most recent year available.

These programs provide different benefits and services to different target populations. Some of the programs provide benefits that phase out gradually, so some people may be eligible for a very low benefit. For example, in fiscal year 2005, food stamp benefits for a three-person household could be as low as $1 per month. Other programs, such as Medicaid, generally offer comparable benefits to every resident of a state who meets eligibility requirements. Table 2 provides a brief description of each of the programs covered in this report and the types of benefits offered by each program.

The programs covered by this review represent both entitlement and non-entitlement programs; programs that are administered entirely by federal agencies and those that are administered through federal partnerships with state and/or local agencies; and programs that allow substantial state and local variation as well as those that provide uniform benefits throughout the country. Table 3 summarizes the level of government with which responsibility for funding and design resides for each of the 12 programs. Although state and local agencies responsible for administering several of the programs have some control over program design and implementation, primary responsibility for setting priorities, guiding policy, and measuring performance rests with the federal agencies that oversee the programs.

The programs covered by this review may have many different goals and objectives of varying degrees of importance, but all of these programs—regardless of size, target population, funding structure, or types of benefits offered—were established to assist persons with limited income, and as such, all were tasked with the overarching goal of reaching and serving those eligible for program benefits or services. Likewise, all means-tested programs share the common goal of ensuring that the funds allocated for program benefits are provided only to those eligible. A key factor in achieving desired program outcomes, including program access and program integrity, is the implementation of appropriate internal control. As discussed in our prior work on this topic, internal control is an integral component of an organization's management that provides reasonable assurance that agencies are achieving outcomes related to the effectiveness and efficiency of operations and compliance with laws and regulations, among other things.

Federal agencies have several key management tools and program reports that they use to provide information about the programs they administer, develop policy, help with management decision-making, and help focus program administrators at all levels of government on achieving the federal agency's highest priorities. For example, through their strategic and annual performance plans and accountability reports, federal agencies set program goals, measure program performance against those goals, and report publicly on their progress.
Some agencies also issue annual or biennial reports to Congress on specific programs they administer; these generally include important program information. A number of governmentwide initiatives have resulted in increased emphasis by program managers on program integrity issues, including OMB guidance, the President's Management Agenda, GAO's High-Risk series, and the Improper Payments Information Act of 2002 (IPIA). The IPIA requires the head of each federal agency to annually review all programs and activities that the agency administers and to identify all such programs and activities that may be susceptible to significant improper payments. For each program and activity identified, the agency is required to estimate the annual amount of improper payments and submit those estimates to Congress before March 31 of the following applicable year. OMB guidance then directs federal agencies to include a measure of improper payments in their annual Performance and Accountability Reports. All 12 programs we reviewed are subject to these requirements.

The proportion of those eligible who are actually enrolled in the 12 selected low-income programs varies substantially both between and within programs, but several factors must be considered to understand the implications of this information for program access. The estimated proportion of eligible people who were enrolled in entitlement programs—those designed to support all those who apply and qualify—ranged from about 50 percent to more than 70 percent. In contrast, the estimated proportion of the eligible who were enrolled in non-entitlement programs—those with limited funding and not necessarily intended to cover all eligible persons—ranged from less than 10 percent to about 50 percent. Within programs, we found that subpopulations were enrolled in different proportions. Some evidence also suggests that enrollees in some programs, such as the Food Stamp Program, tend to be those who are eligible for larger benefit amounts.

For four of the five entitlement programs we examined, the proportion of eligible people who were enrolled varied from around 50 percent in the Food Stamp Program to more than 70 percent in the EITC and SSI programs; and within programs, different types of participants, such as children or the elderly, were enrolled in varying proportions. Table 4 provides estimated participation rates for each of these programs, expressed in ranges to reflect potential errors in sample survey data. The actual participation rates may be outside the range of estimates shown because some types of uncertainty in the estimation of participation rates cannot be quantified. Estimates for the Food Stamp, Medicaid, and SSI programs were developed by the Urban Institute under contract with HHS, and Food Stamp Program estimates were also developed by Mathematica Policy Research, Inc., under contract with USDA. HHS and USDA are the only two agencies to regularly estimate participation rates in low-income programs. We estimated the EITC participation rate for 1999. However, we were unable to estimate an enrollment rate for the Pell Grant program because of concerns about the reliability of some of the data needed to estimate the rate. These estimates indicate that between half and three-quarters of those eligible are participating in four of the entitlement programs.
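A stylized sketch of why such estimates are reported as ranges: the eligible population is estimated from sample surveys and therefore carries sampling error, which propagates into the participation rate. All numbers below are invented for illustration and do not correspond to any program in table 4:

    # Stylized sketch of how survey sampling error produces a range of
    # participation rate estimates. All numbers are invented.
    participants = 4_000_000    # from administrative records (treated as exact)
    eligible_est = 8_000_000    # survey-based estimate of the eligible population
    eligible_se = 300_000       # standard error of that survey estimate

    # Approximate 90 percent confidence bounds (1.645 standard errors).
    low_eligible = eligible_est - 1.645 * eligible_se
    high_eligible = eligible_est + 1.645 * eligible_se

    # A larger eligible population implies a lower participation rate.
    rate_low = participants / high_eligible     # about 47 percent
    rate_high = participants / low_eligible     # about 53 percent
    print(f"participation rate: {rate_low:.0%} to {rate_high:.0%}")

Note that this sketch captures only quantifiable sampling error; as the text states, other sources of uncertainty in these estimates cannot be quantified, so actual rates may fall outside the reported ranges.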
Although differences in years, data sources, and estimation methodologies make it inappropriate to compare participation rates across programs, these estimates provide a general sense of the extent to which programs are reaching those eligible for benefits. Specifically, we found: For the EITC program, we estimated in prior work that about 75 percent of eligible households took advantage of this credit in 1999. This estimate was adjusted to account for tax dollars that were distributed in error that year. Since then, Congress has enacted new tax laws, and the IRS has taken steps to improve EITC program compliance. Two organizations, the Urban Institute and Mathematica Policy Research, Inc., have estimated food stamp participation rates using different methodologies, but both found that about 48 percent of eligible households participated in this program, in 2001 and 2002, respectively. In addition, Mathematica found that 54 percent of eligible individuals were enrolled in 2002. An estimated 66 to 70 percent of eligible, noninstitutionalized people were enrolled in Medicaid in 2000. This estimate does not include institutionalized people, such as those in nursing homes. We could not estimate Pell Grant participation rates because we were unable to assess the reliability of available data on the family income of enrolled students who did not apply for federal financial aid. For SSI, an estimated 66 to 73 percent of eligible adult individuals and married couples received benefits in 2001. This estimate also does not include institutionalized people or disabled children.

Participation rates also can vary substantially even within a single program. Disaggregating program participation rates by targeted subpopulations can help in understanding the implications of overall program participation rates. In table 5, we provide participation rate estimates for selected subgroups within the four entitlement programs for which we have participation rate estimates. As in table 4, these estimates are expressed in ranges to reflect potential errors that result from using sample survey data. (See the footnotes to table 5 for more details.)

As shown, participation in low-income programs varies markedly across subgroups. For example, eligible elderly people tend to participate in the Food Stamp and Medicaid programs at lower rates than the total eligible population. The elderly may participate in the Food Stamp Program at a lower rate than other households because most elderly households receive Social Security and are eligible for relatively small food stamp benefits. For example, in 2000, 44 percent of all households with elderly members eligible for food stamps were eligible for a monthly benefit of only $10 or less, the minimum benefit for households of one or two persons. In comparison, 12 percent of eligible households without elderly members were eligible for benefits that low. Similarly, Medicaid participation among the elderly may be low because Medicare also covers many elderly people who are eligible for Medicaid. In contrast, families with children tend to participate in the Food Stamp, Medicaid, and EITC programs at higher-than-average rates compared with households without children. Although we were unable to disaggregate participation rates by geographic area, program participation rates can also vary by state or locality.
Information on trends in program participation rates over time can also help in interpreting participation rate estimates, but we were only able to provide information on trends in participation for one program. Mathematica Policy Research, Inc., estimates food stamp participation rates annually under contract with USDA and has taken steps to ensure that estimates are comparable over time. As shown in figure 1, food stamp participation rates remained fairly stable between fiscal years 1999 and 2002, declining slightly from 52 percent in fiscal year 1999 to 48 percent in fiscal year 2002 among eligible households, and from 56 percent in fiscal year 1999 to 54 percent in fiscal year 2002 among eligible individuals. We were not able to provide information on trends in participation rates for the Medicaid and SSI programs because changes in estimation methodology from one year to the next mean that prior-year estimates are not perfectly comparable with the most recent estimates; prior-year estimates are therefore not shown in this report. The participation rate estimate for the EITC was available for only 1 year.

Among the seven non-entitlement programs we reviewed, the share of those eligible who participate ranges from less than 10 percent to about 50 percent. These programs are generally not funded to serve all eligible applicants, and many have other design features, such as the prioritization of certain subgroups and eligibility criteria specific to a state or locality, that would not necessarily allow all eligible people who apply to receive benefits. Consequently, we refer to these estimates as coverage rates rather than participation rates. Table 6 provides estimated coverage rates for each of these programs, expressed in ranges to reflect potential errors in survey data. The actual coverage rates may fall outside the ranges shown because other errors in calculating these estimates could not be quantified. Estimates for the CCDF, SCHIP, and TANF cash assistance programs were developed by the Urban Institute under contract with HHS. We also include a WIC estimate developed by the National Research Council and another developed by the Urban Institute under contract with us. We estimated coverage rates for the Head Start and housing programs using administrative and national survey data.

Generally, less than half of those eligible participate in the seven non-entitlement programs. Coverage rate estimates are also subject to limitations, such as differing data sources and estimation methodologies, that make it inappropriate to compare the estimates across programs. However, they provide a general sense of the extent to which these programs are reaching eligible people. Specifically, we found: Almost 20 percent of children who meet state-defined eligibility criteria are receiving child care services through CCDF. This estimate does not reflect participation in child care funded by other federal sources. HHS estimates that about 26 percent of CCDF-eligible children are receiving child care services through either CCDF (including the Child Care and Development Block Grant, state CCDF funds, or TANF transfers to CCDF) or child care services funded directly through the Social Services Block Grant (SSBG), TANF, or TANF state funds.
The Head Start program funded enough slots to serve about 44 to 54 percent of eligible 3- to 4-year-old low-income children in 2003. An estimated 13 to 15 percent of households eligible for Housing Choice Vouchers on the basis of income both received a voucher and were able to successfully lease a housing unit in 1999, the most recent year for which these data were available. In addition, less than 10 percent of households eligible on the basis of income were served through the Public Housing program in 1999. While HCV and public housing are the two largest federal housing programs, there are other federal, state, and local housing programs from which eligible households could receive assistance. HUD estimated that in 1999 about a quarter of all households eligible for any kind of housing assistance received assistance. In 2000, about 44 to 51 percent of eligible children participated in SCHIP. Since 2000, SCHIP enrollment has increased by nearly 3 million children, but the impact this has had on the coverage rate is not known because information on how the eligible population may have changed over this time period is not available. In addition, because states' implementation of their programs varied, awareness of SCHIP may lag in states that created their programs more recently. About half of all eligible households receive cash assistance through TANF. This estimate does not account for families that receive other services offered through this program, such as transportation and child care, but that do not receive cash assistance. WIC participation rates were available from two sources. The National Research Council, using the Survey of Income and Program Participation, found that 51 percent of eligible infants, children ages 1 to 4, and pregnant women were enrolled in WIC in 1998. The Urban Institute, using the Current Population Survey, found that about 51 to 55 percent of eligible infants and children participated in 2001.

Coverage rates for these programs can also vary substantially by subgroup, and they do vary within the TANF and WIC programs, the only non-entitlement programs for which we have this information. Understanding the extent to which different subgroups are covered by non-entitlement programs can be particularly important precisely because these programs are not necessarily funded to cover all those eligible. Table 7 provides coverage rates for groups of people eligible for TANF cash assistance and WIC. As shown, families with no income earners tend to receive TANF cash assistance at a higher rate than families with earners or two-parent families. The data also suggest that families with immigrants have a lower coverage rate than families without immigrants. As noted earlier, some families within these subpopulations may be receiving other services funded by TANF, although data are not available on this. The participation rate among infants eligible for WIC is higher than among children ages 1 to 4.

We were unable to provide estimates of changes in coverage rates over time for most of the non-entitlement programs covered in this review, but we did find an increase in the coverage rate for the Head Start program over the past several years. Since 1997, Head Start coverage rates have increased from about 40 percent to 50 percent. Within this period, however, the coverage rate rose from about 50 percent in 2000 to 58 percent in 2001 and then gradually declined, as the discussion and sketch below illustrate.
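Because a coverage rate is simply funded slots divided by the estimated number of eligible children, modest simultaneous movements in the numerator and denominator can shift the rate sharply within a single year. The following minimal sketch uses illustrative round numbers, not actual program data:

    # Worked sketch of a one-year jump in a slots-based coverage rate.
    # The counts below are illustrative round numbers, not program data.
    slots_2000, eligible_2000 = 900_000, 1_800_000
    rate_2000 = slots_2000 / eligible_2000        # 0.50

    slots_2001 = slots_2000 + 40_000              # slots grew by roughly 40,000
    eligible_2001 = eligible_2000 - 130_000       # eligible children fell by roughly 130,000
    rate_2001 = slots_2001 / eligible_2001        # about 0.56, a jump of several points

The same arithmetic runs in reverse when the eligible population grows faster than funded slots, which is how the rate can fall even as slots increase.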
The coverage rate increased by this amount within 1 year because the number of Head Start slots increased by nearly 40,000 while the number of children eligible decreased by nearly 130,000. Since 2001, the number of funded Head Start slots has increased, but the coverage rate fell because the child poverty rate, and thus the number of eligible children, increased. Figure 2 shows Head Start coverage rate estimates for 1997 to 2003. Because of changes in the methodology used to estimate coverage rates from 1 year to the next, we cannot reliably compare the most recent coverage rate estimates for SCHIP and TANF to those of prior years, but there is compelling evidence that coverage rates for these two programs changed significantly over time. SCHIP was first implemented in 1997, and the number of children in the program grew from 660,351 in fiscal year 1998 to over 3 million in fiscal year 2000. As states continued to reach out to eligible children in subsequent years, participation continued to increase to nearly 6 million in fiscal year 2003. This eightfold increase in the number of children enrolled in SCHIP since fiscal year 1998 almost certainly had a significant impact on the coverage rate. Meanwhile, as states implemented welfare reforms during the strong economy of the late 1990s and other changes occurred in programs serving low-income families, the number of families receiving TANF cash assistance declined, falling by about 50 percent between fiscal years 1997 and 2003. This also likely had a significant impact on the program coverage rate. HHS has reported that about 70 percent of families eligible for TANF cash assistance were enrolled in the program in 1997, compared to less than 50 percent in 2001, but we were unable to quantify how much of this change was caused by changes in estimation methodology. It is also important to note that states' TANF programs have changed over this time. States have more flexibility in the types of non-cash assistance they may provide families, and some families eligible for cash assistance may be receiving other forms of aid instead. In addition, some families or individuals that may appear eligible according to the estimation model may not be participating because they have not complied with program requirements, such as being involved in work activities. Our estimates of the costs of providing benefits to eligible non-participants show that for some programs, those currently participating are generally eligible for a greater benefit amount than those not participating. For these programs, the estimated costs of serving those eligible but not currently receiving benefits may be less than one might expect based on the participation or coverage rates we estimated. This is not surprising: a person eligible for a $10 food stamp benefit, for example, might be more likely to forgo that benefit than a person eligible for a $150 benefit. More specifically, Mathematica found that the 48 percent of households participating in the Food Stamp Program in fiscal year 2001 received about 62 percent of the total amount of benefits that would be paid out if all eligible households received benefits. As shown in table 8, we were able to estimate the additional cost of providing benefits to eligible non-participants for 6 of the 12 programs we reviewed.
These cost estimates take into consideration the characteristics of non-participants and the benefits for which they would be eligible but do not account for increases in administrative costs that could result from participation increases. We were unable to estimate the cost of providing benefits to eligible non-participants in the six other programs—CCDF, HCV, Medicaid, Pell Grants, Public Housing, and SCHIP—because we did not have enough information to determine how the characteristics of the non-participants would have affected the benefit amounts they could have received. Although Medicaid is by far the largest program we reviewed, developing reliable estimates of the costs of serving those eligible but not enrolled in the Medicaid program is difficult for a number of reasons. These reasons include a lack of information about the health status of eligible non-participants, the many variables that affect health care costs, and the open-ended nature of health care benefits. The costs included in the table do not reflect the total costs that would be incurred if all eligible people enrolled in these programs. Probable increases in administrative costs are not included, and we did not estimate how the cost of goods and services offered through the Food Stamp, Head Start, and WIC programs would change if more participants were demanding them. However, the costs do include some additional costs that would be incurred by states in programs where states are required to cover some of the program costs. Given the differences in goals, design, administration, and funding among the 12 programs, it may be neither feasible nor desirable to provide program benefits to all those eligible. Because alternative programs could offer benefits and services similar to those of the programs we reviewed, some of these programs, especially those with limited funding, may be best administered by serving only a certain subgroup of their eligible population. For example, in localities where school systems offer pre-kindergarten classes to 4-year-old children, Head Start programs may want to focus more on enrolling 3-year-olds from low-income families. Also, limited administrative resources may prevent federal, state, and local agencies from serving all those eligible. Full participation among eligible people also may not be feasible because some choose not to enroll in means-tested programs. Given these constraints, some programs may focus on targeting their resources to those most in need of program services. Finally, while participation and coverage rate estimates as well as estimates of potential program costs provide important program information that can be useful to both policymakers and program administrators, these estimates must be interpreted carefully to take into consideration such factors as the quality of survey data, research methodology, the timeliness of estimates, and the availability of alternative programs. These factors limit our ability to compare rates across programs and would need to be considered in efforts to define appropriate or desired participation or coverage rate levels. For a detailed discussion of the factors that affect use of these estimates, see appendix II. Based on our literature review and agency interviews, many factors influence access to means-tested programs, including the benefits provided by the program, ease of access, misperceptions about program requirements, and eligibility verification procedures put in place to ensure program integrity.
These factors influence participation by affecting the number of people who participate in a program as well as the type of people who participate. Entitlement programs may focus on addressing factors that affect access in order to increase program participation. However, because coverage rates in non-entitlement programs are determined primarily by funding levels, non-entitlement programs are more likely to be concerned about the impact of these factors on other aspects of program access, such as the composition of the program caseload and how the program interacts with alternative programs that provide similar benefits. The size and type of program benefits can affect whether an individual participates in a program. Numerous studies show that the size or value of the benefit influences program participation, and benefit size depends largely on how program benefits are structured. Some programs, such as the Food Stamp Program, determine benefit levels based on income, allowing a three-person household to receive as little as $1 per month depending on household income. Other programs, such as Medicaid, generally offer comparable benefits to every resident of a state who meets eligibility requirements. A recent National Bureau of Economic Research study showed that across a range of programs and within programs, larger benefits were associated with higher participation and coverage rates. In addition, our literature review, site visits, and analysis of participation rates for selected programs showed that participation and coverage rates tend to be higher among subgroups eligible for larger benefits. For example: Elderly individuals are generally eligible for small food stamp benefits; their participation rate in the program is lower than for other groups. Families with infants participate in WIC at higher rates than families with older children, in part because the value of the WIC voucher is greater for families with infants. WIC officials in Connecticut reported to us that 10 percent of families stopped participating in the program when their youngest child reached age 1. On average, families with children who are eligible for the EITC receive a higher EITC benefit than individuals and families without children, and they participate at a higher rate. The type of benefits offered can also influence whether someone eligible for a program participates. Some programs—such as EITC and SSI—offer cash benefits, while other programs—such as Head Start and Medicaid—offer direct services, such as educational or health services. Our research on participation in low-income programs generally shows that participation and coverage rates are higher among programs that provide cash benefits than among programs that provide direct services. One reason for this difference is the flexibility cash benefits give to individuals. Although recipients may need the services provided by the programs in which they are enrolled, service benefits do not allow individuals to shift resources away from program purposes toward what they might consider more pressing needs. Our site visits confirmed that the type of benefit influences participation and coverage rates. For example, we found that: SSI recipients who participated in a pilot project that allowed them to receive their food stamp allotment in cash expressed a strong preference for this over traditional food stamp coupons.
Some individuals who need subsidized housing may prefer the Housing Choice Voucher program, which allows them to choose housing in neighborhoods that offer better educational and employment opportunities or to remain where they are while paying less rent, over the Public Housing program, which does not give individuals the flexibility to choose where they live. Factors that influence the ease with which potential participants can access a program—including office hours, program location, and the ability of program participants to redeem or use their benefits—can also affect the number and groups of people who participate in the programs. Several of the programs we reviewed require applicants to visit the program office to establish and maintain eligibility. For instance, local WIC, TANF, and Food Stamp Program offices typically require face-to-face interviews before individuals can receive benefits. Those programs that keep traditional office hours—8:00 a.m. to 5:00 p.m.—pose a barrier to potential applicants who work and would have to take time away from their job in order to apply. Studies on child care and WIC programs identify traditional office hours as a barrier for working families. Similarly, Medicaid officials in California told us that one of their barriers to participation is office hours that do not accommodate working families. Although many program officials suggested that having flexible office hours is important to participants, some offices are challenged to extend their operating hours. WIC officials in Connecticut told us that they have attempted to promote extended hours and have made some progress; however, instituting extended hours remains problematic due to limited funding and contracts governing the workforce. The location of the program office can also affect participation and caseload composition. For many individuals—like the elderly, individuals with disabilities, and families living in rural areas—traveling to a program office beyond the reach of their available transportation makes it difficult to receive needed benefits. USDA reported in its National Survey of WIC Participants that transportation is one of the top barriers in the WIC program, with 30 percent of participants reporting that they missed appointments because they lacked transportation to the office. Studies on the Housing Choice Voucher, child care, Head Start, and Food Stamp programs reported similar findings. Many state and local officials we visited agreed that transportation is a barrier to participation. A TANF, Food Stamp, and Medicaid official in Washington County, Maryland, told us that public transportation is limited and that getting to and from the program offices poses a barrier to participation for many individuals, especially those in rural communities. According to officials, participants without vehicles have to seek rides from family members or acquaintances to access services. For many of these participants, being seen going to the welfare office is difficult and sometimes stigmatizing. Some programs—including child care, Housing Choice Voucher, and Medicaid—offer benefits that can be difficult for participants to use, as some service providers—landlords, day care workers, or health care providers—will not exchange their services for program benefits. For example, our prior work found that the proportion of providers who will accept child care subsidies varied widely by state, ranging from 23 to 90 percent.
Even in cases where providers accepted subsidies, the number of slots for children using child care subsidies was limited. Additionally, several state and local officials suggested that participants are sometimes challenged to find health care providers that accept Medicaid. According to officials in Chisago County, Minnesota, participants have to drive up to 200 miles to visit a dentist that accepts Medicaid. From our literature review and interviews with program officials, we also found that many individuals do not participate in low-income programs because they do not know that they are potentially eligible for benefits. Numerous studies have documented instances where participation was compromised because individuals were unaware of program benefits or had misconceptions about eligibility. For example, several EITC, Medicaid, Food Stamp, Head Start, and SSI studies indicate that one of the primary barriers to participation is that individuals do not know that they are eligible for these benefits. For some individuals—like the elderly and non-English speakers—this unfamiliarity with program benefits is even more widespread, creating a larger barrier to participation and an underrepresentation of these individuals in the caseload. Many program officials we visited agreed and explained that individuals also have misconceptions about program eligibility. For example, some individuals do not believe that they are eligible for benefits because they are employed, while others do not want the perceived stigma associated with the programs. In Los Angeles, Medicaid officials noted that rumors circulating about the eligibility criteria for Medicaid prevent many potential recipients from applying for the program. Particularly in non-English speaking communities, information about social services is often received through word of mouth; thus, when incorrect information circulates, it can have a significant impact on participation. Eligibility verification requirements—rules put in place to improve program integrity by ensuring that only those eligible for program benefits receive them—can also affect the overall number and characteristics of people who participate in the program. Each low-income program in this study has a set of verification requirements that individuals must complete to establish and maintain eligibility. Common requirements among the programs include completing application forms and providing necessary documentation. In addition, some programs require applicants to take additional steps, such as attending face-to-face interviews, documenting parental or spousal assets, or participating in orientation classes. According to our literature review, the complexity of verification requirements can affect the number of enrollees, especially individuals with mental disabilities or those who are homeless. One WIC study reviewed participation rate differences among states and found that states that required applicants to provide proof of income (before it was federally mandated) and had stricter program rules had lower program participation than states that did not require income documentation. Additionally, another study examined EITC participation rates across states and found that differences in participation rates are due in part to whether applicants have help completing the complex tax forms. Many state and local officials agreed that strict or complicated verification requirements can decrease participation, particularly among certain groups of people.
For example, we found: According to officials, the Pell Grant application is very long and complicated and can be difficult for students and their parents to complete. Staff from an organization in St. Paul, Minnesota, dedicated to helping low-income students access higher education told us that many low-income students in the area would not be able to complete the forms correctly without their assistance. CCDF officials in Connecticut and Georgia told us that the application process can be complex and difficult for applicants and, as a result, many applicants submit their applications late or incomplete. Because there is very high demand for child care subsidies, these families are often denied benefits or removed from the waiting list. Local SSI officials told us that participants face many life changes, such as frequent hospitalization and institutionalization, moving into different family households, and changes in the hours that they work. To remain eligible for program benefits, SSI recipients must report these types of changes within 10 days from the end of the month of the change. Finally, while each program has its own set of verification requirements, these requirements become even more challenging when participants are applying for or maintaining eligibility in multiple programs. Program officials in Maryland told us that one of the biggest barriers to accessing multiple means-tested programs is the many different eligibility criteria and reporting requirements of the various programs. According to state officials in Maryland, the TANF and Food Stamp Programs are relatively well aligned, but Medicaid has not coordinated as much with the other programs. According to officials, this can be problematic because a participant who is turned down for one program might assume that he or she is ineligible for other low-income programs and may fail to apply for benefits for which he or she is eligible. Factors that influence participation also affect other aspects of program access, such as targeting and program interactions, among all low-income programs, but particularly among non-entitlement programs whose enrollment levels are determined largely by funding levels. The non-entitlement programs covered in this review—CCDF, Head Start, Housing Choice Vouchers, Public Housing, SCHIP, and TANF—have funding limitations that could restrict the total number of people who could potentially participate in the programs and, as a result, coverage rates for these programs may be determined, at least in part, by funding. However, among programs that allow states substantial flexibility in determining program eligibility and implementation, such as CCDF, SCHIP, and TANF, funding constraints contribute to, but do not determine, coverage rates to the extent that they do for programs that allow less state and local flexibility. Given funding constraints for non-entitlement programs, agencies are challenged to allocate scarce resources appropriately by targeting their benefits to specific groups of people and planning strategically to ensure that the program complements the broader range of programs administered by the same agency or providing similar services or benefits to the program's eligible population.
Because factors that influence participation often affect the composition of a program's caseload, some agencies that administer programs with limited funding have put measures in place to ensure that their caseloads reflect program priorities—whether that involves targeting certain groups or ensuring that benefits do not disproportionately favor or exclude certain subpopulations. For example, although the public housing program generally does not target benefits to households with the lowest incomes, it allows priority to be given to elderly and disabled recipients. Agency officials reported that they track participation of these groups to ensure that their share of the total caseload does not change dramatically from 1 year to the next. Similarly, state child care officials told us that funds are generally provided to recipients based on priority groups that favor families receiving public assistance and those with the greatest financial need. Additionally, many agencies are cognizant of how their programs interact with and complement other programs that serve the same population. For example, some child care programs pair with half-day Head Start programs to provide continuous care to children of working parents; Medicaid and SCHIP programs within some states coordinate their enrollment processes to facilitate access to health insurance for individuals in the state; and between 1998 and 2003, HUD responded to a decrease in the number of public housing units available for low-income households by increasing resources available through its voucher program to maintain coverage for families. Likewise, in allocating TANF resources, programs often coordinate with local workforce development agencies, child care programs, and other programs that provide similar services to best meet the needs of their clients without duplicating an existing effort. In these ways, programs acknowledge and respond to dimensions of access beyond participation levels and rates. Program administrators have implemented many strategies to achieve desired program outcomes, including those related to program access and program integrity. However, while agencies have generally taken steps to monitor and disseminate information on program integrity, few track and report on the extent to which they are serving their eligible populations. The programs we reviewed have many goals and objectives of varying degrees of importance to the agencies that administer them, among them program access and program integrity. Strategies implemented at the federal, state, and local levels to address these two issues include information systems, data sharing, and technological innovation; changes to the application and eligibility verification process; and outreach and coordination with other programs. Although federal agencies do not have direct control over many of the strategies we identified—including those mandated by law and those initiated at the state and local level—federal managers can play a role in encouraging and facilitating such strategies by emphasizing program access and program integrity as federal agency priorities. In response to the Improper Payments Information Act of 2002, all of the federal agencies we reviewed have taken at least some steps to identify and begin reporting information to Congress and others on the extent of improper payments.
We also found, however, that for several of the programs we reviewed, federal agencies have not taken steps to identify and use participation or coverage rate information in managing their programs. Having up-to-date information on the extent to which their programs reach eligible individuals can help agencies plan strategically, set priorities, and ensure that they are reaching eligible families. The state, local, and federal program officials we spoke with identified several strategies that they believe may improve one of the two objectives—program access or program integrity—without harming, and in many cases while improving, the other. As mentioned, some efforts to increase program integrity and prevent fraud, such as increasing the amount of eligibility verification required, may actually hurt access to the program by deterring even those eligible from applying. However, program administrators told us of several strategies that increase access while maintaining and even improving integrity. The complementary strategies we identified are enabled by information systems, data sharing, and technological innovations; changes in the application and eligibility verification process; and outreach and coordination with other programs. Improved information systems, sharing of data between programs, and use of new technologies can help programs better verify eligibility and make the application process more efficient and less error-prone. These strategies can improve integrity not only by preventing outright abuse of programs, but also by reducing chances for client or caseworker error or misunderstanding. They can also help programs reach out to populations who may face barriers. One strategy involves sharing verified eligibility information about applicants across programs. Data sharing prevents applicants from having to submit identical verification to multiple programs for which they may be eligible, and it can also speed up the sometimes-lengthy application process. In addition, data sharing allows programs to check the veracity of information they receive from applicants against other databases. In Minnesota, WIC caseworkers can determine participant eligibility at the time of application by using technology systems to access basic eligibility information from other means-tested programs, such as Medicaid, food stamps, and TANF. SSI administrators told us that during the application interview, income data are entered in their computer system. This information is then checked against a number of other databases, including those of the IRS, the Department of Veterans Affairs, and the state Civil Service Administration, to quickly verify earnings and other eligibility requirements, and inconsistencies are flagged. Similarly, Georgia has an interactive computer system that aggregates client information for TANF, Medicaid, and the Food Stamp Program and calculates benefits for all three programs. Additionally, when clients need to make a change in their eligibility information, these data are automatically changed for all three programs, so that clients can receive the appropriate level of benefits based on their most current eligibility information. Furthermore, the data matches may include information from the Department of Labor wage match, SSI benefit records, prison records, and death records. If any of the matched information conflicts with the information provided by the client, a caseworker is to ask additional questions of the client.
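To illustrate the kind of cross-database check these systems perform, the following sketch shows, in highly simplified form, how income reported on an application might be compared against amounts already verified in other programs' records, with discrepancies flagged for caseworker follow-up. It is illustrative only; the field names, the tolerance, and the data sources are hypothetical and are not drawn from any particular state's system.

    # Illustrative sketch of a cross-database consistency check.
    # Field names, the tolerance, and the external sources are hypothetical.

    TOLERANCE = 100  # dollars of discrepancy allowed before flagging

    def check_reported_income(application, external_records):
        """Compare income reported on an application against amounts
        already verified in other programs' databases and return a list
        of flags for a caseworker to follow up on."""
        flags = []
        for source, verified_income in external_records.items():
            if abs(application["reported_income"] - verified_income) > TOLERANCE:
                flags.append(f"reported income differs from {source} record")
        return flags

    # Example: an applicant reports $900 per month; the wage match disagrees.
    application = {"applicant_id": "A-001", "reported_income": 900}
    external_records = {"state wage match": 1250, "SSI benefit file": 900}
    for flag in check_reported_income(application, external_records):
        print(flag)  # each flag prompts additional questions of the client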
In all these ways, caseworkers can accelerate the application process while capitalizing on verified eligibility information already provided to other programs. Some administrators told us that such data sharing, however, can be complicated by concerns over ensuring privacy and by insufficient technological capacity. Local program administrators told us that eligibility data for Medicaid and SCHIP are tightly controlled because of the Health Insurance Portability and Accountability Act (HIPAA)—a law that limits access to an individual's health information—but that these data might be shared, if warranted, with proper consent. Data sharing can be impeded by technological limitations, such as computer systems that cannot communicate directly with other systems over the Internet or some other network, or a lack of software needed to translate information into formats that other computers can understand. Our prior work on data sharing identifies strategies agencies have implemented to address concerns about privacy and technological capacity issues. Some agency officials told us of ways in which on-line applications can improve both access and integrity. As stated, program integrity is compromised not only by willful misrepresentation, but also by applicant and caseworker error. Web-based or partially web-based applications can automatically check that all required fields are completed before an application is submitted for review and can automatically calculate benefits, thereby reducing error. Moreover, having the option of an on-line application can increase access for those who are eligible for benefits but who may have difficulty physically getting to program offices. Minnesota has instituted an on-line interactive application for Medicaid that program administrators believe will make the application process easier for both participants and caseworkers. In addition to making the process easier, the system will check verification information against a variety of other data sources, reducing the likelihood of errors. In this way, a potentially burdensome application process becomes simpler, more accessible, and less error-prone. Similarly, eight states accept food stamp applications through the Internet. Of course, this strategy is less viable or effective where access to computers is limited or where the on-line environment cannot be adequately secured. Program administrators told us about several changes in the application and eligibility verification process that have improved their ability to promote access and ensure integrity. Efforts to increase the accuracy of eligibility and benefit determination can increase access by reducing the number of applicants who are incorrectly denied benefits or whose benefit determinations result in underpayments. For example, streamlining or simplifying the application, such as by clarifying confusing wording and removing redundant questions, and providing applications in an applicant's native language can increase access while also promoting program integrity by reducing errors and misunderstanding in the application process. According to federal officials, most states use a simplified application form for the SCHIP program to remove application barriers for families.
For example, the SCHIP program applications in both Connecticut and California have been reduced from 17 and 27 pages, respectively, to only 4 pages each without sacrificing required information, according to state officials. In many cases, the streamlined SCHIP application forms prompted corresponding changes in state Medicaid applications as well. In addition, many of the program administrators with whom we spoke told us that their applications and forms were available in many languages. For example, the Minnesota Department of Revenue now has state EITC tax forms and fact sheets directing filers to the federal EITC in eleven languages. Another way in which programs can streamline the application process is through categorical eligibility with other programs, which can be set at the federal or state level depending on the program. Categorical eligibility, also known as adjunctive eligibility, allows an individual who has been found eligible for one program to be automatically granted eligibility for another program, typically one with less stringent eligibility criteria. For example, if an individual is receiving SSI, federal legislation generally requires that the person be automatically eligible for Medicaid. Similarly, if a pregnant woman is receiving Food Stamps, Medicaid, or TANF, then she automatically qualifies for WIC benefits—and thus does not need to provide income verification once again when applying for the program. States also have flexibility to grant automatic eligibility for other programs. A state WIC program, for example, may grant automatic eligibility to families receiving Free and Reduced Price School lunches. These adjunctive or automatic eligibility policies ultimately allow for simpler applications, which may enhance access and reduce error. In fact, federal administrators at USDA noted that adjunctive eligibility is one of the most important tools now used to address program integrity and access issues in a way that cuts across programs. The provision of more comprehensive in-person assistance in filling out applications can encourage an applicant eligible for benefits to complete an otherwise lengthy, complicated, or intimidating application process. Such personal assistance can also prevent errors that result from misunderstanding application questions or requirements and can even deter fraud by reminding applicants in person that fraudulent claims are punishable by law. For example, California officials told us that, to increase access to the program, the state's SCHIP program developed a training process that certified staff from other entities, such as schools and community-based organizations, as "application assistants." These "application assistants" are also trained to recognize when applicants seem to be submitting false information and are prohibited from assisting families who have previously been found to have committed fraud. In a Maryland SSA office, SSI program administrators told us that they had abbreviated their application form so that most information is now collected in an interview. This process helps improve access for those with limited literacy or those who may be discouraged by a long paper application. Officials also noted that this process helps program integrity because the interviews allow caseworkers the opportunity to remind clients that fraudulent claims are punishable by law.
Similarly, EITC officials in Minnesota told us that free tax preparation assistance could make the process less complicated for filers and could also cut down on fraud from tax preparers. As one official explained, many low-income families live near a high concentration of paid tax preparers, who have incentives to fraudulently file for a high credit. While more in-depth application assistance has these benefits, it can pose challenges to programs in ensuring the quality of the assistance offered, and it can be costly. According to California SCHIP officials, funding to train "application assistants" for the SCHIP program in California was ended in May 2002 due to budget constraints. Lastly, some policies put in place to prevent fraud at the time of application can also help with program access. The Connecticut Department of Social Services has instituted a fraud prevention program, termed FRED (Fraud Early Detection), to address an increase in the amount of fraud and error it had detected. The purpose of FRED is to identify potential fraud cases before benefits are paid out, preventing improper payments rather than identifying fraud after payments have been made. Under FRED, cases that meet 1 of 12 risk criteria are to be referred to a fraud investigator who is to visit the home within 10 days of the referral. While visiting the home, the investigator is to interview the client about aspects of the risk criteria and make a recommendation regarding eligibility. While the primary purpose of the visit is to prevent fraud, in practice investigators have found that their role is also to explain the eligibility criteria more clearly to applicants and to explain what steps they need to take in order to be eligible for program benefits. In addition, investigators are trained to provide additional information to the family on other resources available to them and to assist them with referrals, thereby increasing access to an array of related social services. Targeted outreach to populations that might have more barriers to accessing the program has the potential to increase access without necessarily compromising integrity. Many program officials we spoke with told us that they had strategies to reach populations—such as rural dwellers, the elderly, or those with limited English proficiency—that have had trouble accessing the program. Such strategies include holding information sessions, distributing pamphlets in ethnic community centers, and partnering with advocacy groups. In addition, cross-agency referrals for applicants already found to be eligible and more overall agency coordination can improve access without sacrificing integrity. Many agency officials with whom we spoke told us that they coordinated with other programs to reach potential clients. In Maryland, the Department of Social Services invites the Volunteer Income Tax Assistance program to work out of its office, so that low-income earners already in contact with social services can have better access to the EITC. Some offices in California participate in an open-door policy program in which clients can access any social service program through referrals at any state government office they walk into. Some program administrators told us that program co-location can help increase access. This happens when an individual applying for one program in an office is referred to and immediately served by another program in that same location.
Examples we found included Food Stamp operations located in TANF offices and a WIC clinic co-located with the state Children's Health Insurance Program. All of the programs we reviewed emphasize both program access and program integrity, and as we have described, many federal, state, and local program administrators have implemented strategies intended to achieve access and integrity goals. However, not all agencies responsible for administering these programs have established management practices, in keeping with standards of internal control, that will consistently guide them in achieving these outcomes. Among the standards that are particularly relevant for agencies responsible for administering means-tested programs are: establishing and maintaining a control environment, a culture of accountability that sets a positive and supportive attitude toward achieving management objectives; monitoring program performance over time; and recording and communicating relevant, reliable, and useful information to management and others who need it to carry out their internal control and operational responsibilities. To achieve desired program outcomes and safeguard federal funds, agencies should make these standards part of their operational and management practices. While the agencies we reviewed have taken steps to establish internal controls for improper payments and program integrity, not all have established similar controls for achieving desired program access outcomes. There are many ways in which federal agencies can establish a culture of accountability, one that "sets the tone" for program administrators at all levels of government to emphasize certain priorities. One way to do this is to monitor progress toward achieving desired program outcomes, and another is to effectively communicate program information in a broad sense, with information flowing down, across, and up the organization. Depending on the program, some aspects of an achievement-oriented environment are codified in laws, regulations, and federal policies. For example, the Improper Payments Information Act of 2002 requires agencies to calculate annual improper payment estimates for programs and activities susceptible to significant erroneous payments and sets statistical sampling confidence and precision levels for estimating those payments. As shown in table 9, some of the federal agencies covered by this review have already begun to monitor improper payments for the programs they administer and, as directed by OMB guidance, communicate this information by including a measure of improper payments in their annual Performance and Accountability Reports. Including measures of improper payments in agency annual performance plans and accountability reports is one way to provide officials with important information on areas of success and areas that need improvement and to send a signal to all those working on agency programs that the measured goal is a priority. (For information on the amounts of improper payments reported by agencies, see app. III.) To varying degrees, the other agencies included in this review have begun taking steps to identify and report on improper payments as well. While agencies are taking steps, much remains to be accomplished before a comprehensive picture of the amount of improper payments for these programs is available.
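In its simplest form, the kind of estimate the act calls for is a sample proportion reported with a stated confidence level. The sketch below shows the basic calculation for a simple random sample of payments; the counts are invented, and actual agency methodologies typically involve more elaborate stratified sample designs and dollar-weighted estimates.

    # Illustrative sketch of estimating an improper payment rate from a
    # simple random sample of payments, with a 95 percent confidence
    # interval. The sample counts are invented for illustration.
    import math

    def improper_payment_rate(improper, sampled):
        """Return the estimated rate and its 95 percent confidence interval."""
        p_hat = improper / sampled                      # estimated rate
        se = math.sqrt(p_hat * (1 - p_hat) / sampled)   # standard error
        margin = 1.96 * se                              # 95 percent z-multiplier
        return p_hat, (p_hat - margin, p_hat + margin)

    # Example: 60 of 1,200 sampled payments are found improper.
    rate, (low, high) = improper_payment_rate(60, 1200)
    print(f"Estimated improper payment rate: {rate:.1%} "
          f"(95% confidence interval: {low:.1%} to {high:.1%})")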
While an internal control framework is emerging for addressing financial program integrity issues, few of the agencies responsible for administering the programs we reviewed have established a similar framework for achieving desired program access outcomes, although they have implemented numerous strategies to increase program participation. The 12 programs we reviewed have numerous program goals and objectives of varying degrees of importance to the agencies that administer them. Because the most fundamental purpose of these programs is to serve low-income individuals and families, all track measures that provide some information about program participation and access. For example: All of the agencies responsible for administering the programs we reviewed track the number of people participating in their programs. Some federal agencies—such as HUD—measure the extent to which program participants redeem or use benefits provided to them. Others use different measures related to access; CMS, for example, tracks the number of individuals who are uninsured each year. Many agencies track participants and participant outcomes through program effectiveness studies. Through these efforts, agencies have demonstrated their concern about program access; yet the ways in which most of these agencies measure program participation do not inform administrators about the extent to which they are meeting overall need and therefore do not provide all the information they need to monitor access. As discussed above, access encompasses not only the number of program participants, but also the extent to which programs reach the total population eligible for benefits and services, how well agencies are allocating resources among targeted subpopulations, and, from an agencywide or even governmentwide perspective, how well specific means-tested programs complement and interact with programs that serve similar populations or provide for similar needs. While concerns about the allocation of scarce resources and interactions with other programs apply to all low-income programs, they may be of particular importance to non-entitlement programs whose participation is limited by funding constraints. Participation or coverage rate information can guide program administrators in setting priorities and targeting scarce resources, even among programs that were not intended to serve everyone eligible for program benefits. As shown in table 10, only one of the agencies we reviewed, USDA, regularly estimates participation rates for the programs it administers and communicates and disseminates information on these rates in a way that promotes a culture of accountability supportive of achieving program access outcomes. USDA has been estimating participation rates annually for the Food Stamp Program since 1975. The agency uses this information to get a better understanding of how well it is reaching certain priority subpopulations, such as the working poor, single-parent families with children, and SSI recipients, and to identify states in need of outreach assistance. In addition, USDA has established a performance goal to increase participation rates in the Food Stamp Program and reports food stamp participation rates in its annual performance report. In these ways, the agency uses this information for program management purposes. USDA has taken steps to develop a reliable participation rate measure for the WIC program and plans to estimate the WIC participation rate annually.
Because the WIC program is not an entitlement and participation is determined by appropriated funds, FNS officials told us they do not plan to establish a performance goal related to the WIC participation rate. However, officials told us that they do think it is important to monitor and report on the WIC participation rate for policy, planning, and budgeting purposes, and the agency plans to use these data in managing the program by highlighting the measure in the program's budget reports and its annual report to Congress. HHS plays a key role in estimating participation rates for several means-tested programs. Since 1973, HHS's Office of the Assistant Secretary for Planning and Evaluation has funded TRIM3 for the purposes of policy development, analysis, and research on such topics as the interactive effects of programs serving low-income families. The Welfare Indicators Act of 1994 requires HHS to submit annual reports to Congress on welfare receipt in the United States and key indicators and predictors of welfare dependence. These reports, produced annually since 1997, include participation and coverage rates from TRIM3 for TANF (and its predecessor program, Aid to Families With Dependent Children), SSI, and food stamps. HHS includes CCDF coverage rates in the biennial congressional report and plans to include a performance measure related to participation rates in its annual performance reports starting in fiscal year 2006. However, HHS generally does not use the participation rate estimates available from TRIM3 in managing its other programs, such as by highlighting the information in its annual performance plans or congressionally mandated program reports. For example, TANF participation rates are not included in the agency's TANF Annual Report to Congress, and Medicaid and SCHIP participation and coverage rates are not included in any of the other congressionally mandated reports disseminated by CMS, the agency that administers these programs. Only one of the other agencies we reviewed, the IRS, has plans to estimate and report on participation rates for the programs it administers. IRS plans to begin estimating the participation rate in the EITC program on an annual basis starting in fiscal year 2005 and to use the EITC participation rate as a performance measure in its annual performance report. However, the other agencies we reviewed, including SSA, HUD, and ED, do not estimate participation or coverage rates for the SSI, Housing Choice Voucher, Public Housing, or Pell Grant programs and reported no plans to do so. Agency officials cited several reasons for not collecting participation rate information. For some programs, information on certain eligibility criteria can be difficult or costly to obtain. For example, reliable data on the disability status of children who do not receive SSI are not readily available from national survey data. We were unable to provide a participation rate estimate for this group for this reason. Absent a legislative or other mandate to estimate participation and coverage rates, agency officials may not consider conducting such estimates a priority. However, some agency officials who do not regularly estimate participation or coverage rates indicated they would find such estimates useful. For example, HUD officials indicated that the agency finds its estimates of worst case housing needs, which include some coverage rate measures for all housing programs, useful for policy and planning purposes.
Similarly, while Head Start officials do not publish or report coverage rate estimates in any agency documents, they indicated that they occasionally estimate participation rates informally for internal use in policy, planning, and budgeting. Federal agencies responsible for administering means-tested programs are charged with ensuring that those eligible for program benefits and services are able to access them and with ensuring prudent oversight of taxpayer dollars by maintaining program integrity, including preventing improper payments. Program administrators may find it challenging to achieve these goals in part because they may conflict—ensuring program integrity can potentially come at the expense of program access. While we identified a number of strategies that have been implemented by federal, state, and local agencies that could potentially achieve both of these goals, unless agencies track progress toward meeting these goals in a way that signals to program administrators at all levels of government that both access and integrity are federal priorities, there is a risk that one of the goals may be compromised. Estimating participation rates for means-tested programs is difficult—and more difficult for some than for others—and cannot be considered "an exact science." Nevertheless, when done rigorously—in a way that ensures that estimates are comparable over time and quantifies the errors that result from the estimation methodology—these estimates can provide a basic understanding of the extent to which needy populations are being served by federal programs and can be a key component of a federal agency's efforts to establish a culture of accountability that emphasizes the importance of program access. The federal agencies responsible for administering only 4 of the 12 programs we covered in this review—CCDF, food stamps, WIC, and EITC—either currently collect and report information in key performance and program reports on the extent to which they are reaching their target populations or plan to do so. As a result, the federal agencies that administer the other eight programs covered by this review may lack data that could inform their budgetary and programmatic decisions, assist them in ensuring that their programs are coordinated with and appropriately complement other programs available to their target populations, help them to manage their programs effectively, and enable them to provide policymakers with the information they need to make decisions. As the department moves forward with participation rate estimates for the WIC program, we recommend that the Secretary of Agriculture: 1. take steps to clarify to users the limitations of the estimates; and 2. ensure comparability between estimates by reanalyzing data as estimation methodologies change, so that consistent methods are applied over time. To help ensure that agencies have information on program access, we recommend that the Secretaries of Education and HUD study the feasibility of calculating participation or coverage rates and including them in key program management reports. We recommend that the Secretary of HHS consider making some improvements to the participation and coverage rate information produced for the CCDF, Medicaid, SCHIP, and TANF programs, such as: 1. quantifying errors that result from calculating these estimates to help users better understand the accuracy of the data; and 2. ensuring, to the extent possible, that estimates are comparable over time.
We also recommend that the Secretary include participation and coverage rate estimates in key program reports for these four programs. We also recommend that the Secretary study the feasibility of estimating the coverage rate for the Head Start program on a regular basis and include these estimates in key Head Start program reports. As the IRS moves forward on developing participation rate estimates to use as a program performance measure for the Earned Income Tax Credit, we recommend that the Commissioner of the Internal Revenue Service take steps to quantify errors that may result from estimating these participation rates to help users better understand the accuracy of the data and ensure that estimates will be comparable over time. To help ensure that the agency has information on program access, we recommend that the Commissioner of Social Security consider the feasibility of providing participation rate information (from the existing source or another source) in SSA program management reports. We provided drafts of this report to and received comments from the Departments of Agriculture, Education, Health and Human Services, and Housing and Urban Development; the Internal Revenue Service; and the Social Security Administration. They generally agreed with our conclusions and recommendations. Agency written comments appear in appendixes IV through VII. On February 3, 2005, we met with FNS officials to discuss their comments. The officials said they agreed with our conclusions and recommendations. They stated that they recognize the importance of tracking both program access and program integrity and that their ongoing efforts to estimate participation rates in the food stamp program reflect their commitment to balancing the goals of access and integrity. They stated that as they move forward in developing a participation rate estimate for the WIC program, they plan to take steps to clarify to users the limitations of the estimates and to ensure that the estimates will be comparable over time. Agency officials also provided technical comments, which we incorporated in the report where appropriate. The Department of Education agreed that data sources now available would not provide sufficient information to develop a reliable participation rate estimate for the Pell Grant program. However, the department cited a study currently being conducted by the National Center for Education Statistics that will determine the feasibility of implementing a system of collecting data on all postsecondary students. This study will also provide information that could be used to examine the feasibility of estimating participation rates for the Pell Grant program. The Department of Health and Human Services provided comments related to each of the five programs under its jurisdiction. The department noted that it has taken steps to measure program access in the TANF, CCDF, Medicaid, and SCHIP programs through its funding of TRIM3, although it does not always report these estimates in key program reports. With regard to CCDF, HHS recognized the importance of coupling efforts to control improper payments with efforts to provide effective service delivery, and therefore access. The department noted that it has recently established a performance measure related to coverage rates for the CCDF program. We incorporated this information into the report. HHS also thought that we overstated the differences between CCDF and TANF in the impact of state flexibility on coverage rates, so we changed that section of the report.
With regard to TANF, HHS raised the concern that increasing coverage rates may conflict with other TANF program goals and that, consequently, coverage rates are not an appropriate performance measure for the TANF program. We recognize that increasing the coverage rate is not an explicit goal of TANF and believe this concern is consistent with the report's conclusion that increasing program access may not be feasible or desirable for some programs. Nevertheless, we continue to believe that including program participation and coverage rates in key program reports can help inform policymakers and program administrators at all levels of government and assist them in making budgetary and programmatic decisions. In its discussion of SCHIP and Medicaid, HHS indicated that the report reinforced the agency's approach to promoting both access and integrity. The department noted that CMS is working to improve its coverage rate estimates for SCHIP and that, in the meantime, it continues to measure progress toward increasing coverage among children under 200 percent of poverty, a proxy for the program coverage rate. HHS also highlighted several other issues that we believe are already sufficiently discussed in the report. These include the fact that not all those eligible and not participating need program services; that some programs allow states considerable flexibility in implementing programs and determining eligibility; that participation or coverage rates can be challenging to estimate accurately; and that uncertainties and limitations surrounding participation and coverage rate estimates should be made clear to users of this information. HHS also reiterated some of the limitations of our cost estimates and our Head Start coverage rate estimates. Agency officials also provided technical comments, which we incorporated in the report where appropriate. Department of Housing and Urban Development officials provided us with technical comments, which we incorporated in the report where appropriate. The Internal Revenue Service agreed with our conclusions and recommendation. The IRS recognizes the interactions between efforts to improve program access and those intended to improve program integrity, and as part of its efforts to use a balanced approach that emphasizes both goals, the IRS has recently established the EITC participation rate as a GPRA measure. As it moves forward in developing the EITC participation rate, the IRS plans to take steps to quantify errors that may result from estimating these participation rates to help users better understand the accuracy of the data and ensure that estimates will be comparable over time. Agency officials also provided technical comments, which we incorporated in the report where appropriate. The Social Security Administration agreed with our conclusions and recommendation. The agency noted that it has considered the feasibility of estimating participation rates in the past and has calculated participation rates among the elderly. However, the agency also noted that limitations of the national survey data make such estimates challenging for the SSI program. The agency agreed that estimating participation rates on a regular basis could provide an important management tool. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time we will send copies of this report to the Secretaries of the Departments of Agriculture, Education, Health and Human Services, and Housing and Urban Development; the Commissioners of the Internal Revenue Service and the Social Security Administration; relevant congressional committees; and others who are interested. Copies will be made available to others upon request, and this report will also be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (415) 904-2272. Additional GAO contacts and acknowledgments are listed in appendix VIII.

We designed our study to provide information on (1) the proportion of those eligible who are participating in 12 selected low-income programs; (2) the factors that influence participation in those programs; and (3) strategies used by federal, state, and local administrators to improve both access and integrity, and whether agencies monitor access by measuring participation rates. To obtain information on these issues, we compiled estimates of participation and coverage rates for each of the programs covered by the review for the most recent year for which data were available, we reviewed literature on participation in programs for low-income people, and we gathered information directly from both federal program officials and state and local administrators. To determine the completeness and accuracy of data obtained from various sources, we took several steps, described below, to assess the data and determined that the data were sufficiently reliable for use in this report. We performed this work between November 2003 and January 2005 in accordance with generally accepted government auditing standards.

To obtain information on the proportion of those eligible who are participating in 12 selected low-income programs, we used information developed by the Urban Institute under contract with HHS about the participation and coverage rates they estimated for seven programs—CCDF, Medicaid, TANF, food stamps, SCHIP, SSI, and WIC. Also, we estimated the coverage rates for the Head Start program using administrative data and CPS data and for the Housing Choice Voucher and Public Housing programs using data from a HUD report on worst case housing needs in 1999. The participation rate estimate for the EITC program was from our previous work. In addition, we performed a literature review to obtain information about program participation and coverage rates that were estimated by other organizations. When possible, we obtained rates from 1997 to the most recent year available, but as mentioned in the report, we did not present many earlier estimates because changes in methodologies did not allow us to determine whether changes in the rates over time were actual changes in participation or resulted from changes to the estimation methodology. We were unable to provide a Pell Grant program participation rate because we were unable to assess the reliability of available data on the family income of students who did not apply for federal financial assistance, which is necessary for determining program eligibility. Table 11 summarizes the methodologies used to calculate participation and coverage rates for the 12 programs covered in this review.

The Urban Institute estimated participation and coverage rates for CCDF, Medicaid, TANF, food stamps, SCHIP, SSI, and WIC under a contract with the Department of Health and Human Services (HHS).
In order to evaluate the reliability of these data and to determine whether they met our standards for reporting quantitative information, we contracted with the Urban Institute to obtain additional information about these estimates and the methodologies used to calculate them, including the Transfer Income Model, version 3 (TRIM3). The TRIM3 microsimulation model was used to determine the number of people who would be eligible for a program if they applied. This model simulates the process that a caseworker would undergo to determine eligibility by reviewing individual or household characteristics such as household composition, income, disability, and other factors as appropriate for the programs. TRIM3 primarily relies on the Census Bureau's Annual Social and Economic Supplement to the Current Population Survey to estimate the eligible population. For some programs, this data source does not include all information needed to determine a person's or household's eligibility. For example, the data do not include information on the value of certain assets that might be considered during eligibility determination for some programs. In these instances, the Urban Institute researchers used TRIM3 to assign eligibility-related characteristics to individuals included in the CPS who they believed were likely to have them, a process referred to as imputation. The estimates of the eligible population were used as the denominators of the participation and coverage rates, and administrative data on the number of participants were generally used as the numerator. In general, groups excluded from the TRIM3 eligibility estimate used for the denominator (such as those who are institutionalized) were also excluded from the administrative data used for the numerator.

The CPS data limitations and the process of assigning characteristics may have resulted in errors in the estimates of the eligible population and, therefore, the participation and coverage rates for seven programs. These include sampling error, non-sampling error, error related to the imputation process, and error related to program implementation. GAO statisticians and social science analysts reviewed the Urban Institute's TRIM3 technical documentation, interviewed the Urban Institute researchers, requested additional TRIM3 calculations from the Urban Institute, and reviewed available academic and government literature on the sources of error in microsimulation models of low-income program participation to determine what errors existed. Although there are several types of potential errors, not all of these errors could be quantified.

The CPS, like all sample surveys, is subject to sampling error. This error reflects the difference between (1) the results that are obtained from the members of the sample that are actually surveyed and (2) the results that would be obtained if each member of the population were surveyed. We contracted with the Urban Institute to calculate the sampling error of CPS data used in TRIM3, and we verified that they used the appropriate procedures to make these calculations. We used the estimates of sampling error provided by the Urban Institute to calculate a range for the most recent TRIM3 estimates at the 95 percent confidence level, and these ranges are presented in the report. However, the ranges underestimate the total error in TRIM3 estimates. The CPS data used in TRIM3 are also subject to non-sampling error that has not been quantified.
These errors can occur if survey respondents have difficulty interpreting a particular question, lack information necessary to answer a question, are uncomfortable with accurately reporting certain sensitive information, or do not answer certain questions, among other factors that affect data collection and measurement. Such non-sampling errors are likely to have affected variables that are used by TRIM3, such as labor force participation. According to a 2002 Census Bureau report, the 1996 CPS covers a smaller proportion of the Black population (84 percent) than it does of the White population (94 percent). The Census Bureau reported that this low coverage rate biases labor force estimates, because those who are left out of the sampling frame are quite different from those who are included in it. Yet the full extent of non-sampling error in the CPS is unknown.

The process of assigning characteristics when CPS data are not available—the imputation process—also can produce error. In most instances, efforts were not made to quantify the error associated with these imputed data. However, a National Research Council (NRC) panel attempted to quantify such error when reviewing methods for estimating participation rates for WIC and found that estimates can vary noticeably depending on whether imputed or reported monthly income data are used to determine eligibility. During this review, the NRC panel compared TRIM-adjusted CPS data, in which monthly income is imputed on the basis of annual income, and Survey of Income and Program Participation (SIPP) data, in which monthly income is collected directly from survey respondents.

Specification error can result when the variables in a microsimulation model incorrectly represent the process for determining program eligibility that actually occurs between caseworkers and applicants. For example, in its TANF module, TRIM3 does not consider certain aspects of program eligibility, including specific work requirements, behavioral requirements such as school attendance and immunizations, and sanctions from failing to comply with work rules or child support rules. These exclusions could cause TRIM3 to overestimate eligibility and therefore underestimate participation and coverage rates. Similarly, while several means-tested programs base their eligibility on the federal poverty guidelines, there is no universal administrative definition of "family" or "household" that is valid for all programs that use the poverty guidelines. To the extent that federal programs use administrative definitions that differ somewhat from the statistical definitions used by the CPS, estimates of the eligible population could be affected.

We estimated the participation and coverage rates for the Head Start, Housing Choice Vouchers, Public Housing, and EITC (in prior work) programs. These estimates are also limited by the data sources and assumptions about who may be eligible for these programs. The EITC and Head Start rates were estimated using the CPS and, therefore, are subject to the sampling and non-sampling errors mentioned earlier. In addition, because the CPS did not have complete information to determine who may be eligible for the EITC, we made assumptions about whether children were in a household long enough for them to be counted as a member for tax purposes. We also assumed that the lack of data on certain assets would not significantly affect participation rate estimates.
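To make the mechanics described above concrete, the following sketch (ours, not the Urban Institute's) shows how a simplified, income-only eligibility rule applied to weighted survey records yields an eligible-population denominator, how an administrative participant count supplies the numerator, and how a sampling standard error on the denominator translates into an approximate 95 percent confidence interval for the rate. All names and numbers are hypothetical, and TRIM3's actual eligibility rules are far more detailed.

```python
# Illustrative only: a toy version of the participation rate calculation
# described above. Real eligibility rules (TRIM3's) are far more complex,
# and every number here is hypothetical.

def simulate_eligibility(households, income_limit):
    """Count the weighted survey households that a simplified
    'caseworker' rule deems eligible."""
    return sum(h["weight"] for h in households if h["income"] <= income_limit)

# Hypothetical weighted survey records (annual income in dollars, survey weight).
survey = [
    {"income": 9_000,  "weight": 1_200},
    {"income": 14_500, "weight": 950},
    {"income": 22_000, "weight": 1_100},
    {"income": 31_000, "weight": 1_050},
]

eligible = simulate_eligibility(survey, income_limit=15_000)  # denominator
participants = 1_400        # hypothetical administrative count (numerator)
rate = participants / eligible

# If the sampling standard error of the eligible-population estimate were
# known (as the Urban Institute provided for TRIM3), a rough 95 percent
# interval for the rate can be obtained by recomputing the rate at the
# denominator's upper and lower bounds.
se_eligible = 150           # hypothetical standard error of the denominator
low = participants / (eligible + 1.96 * se_eligible)
high = participants / (eligible - 1.96 * se_eligible)

print(f"participation rate: {rate:.1%} (95% CI roughly {low:.1%} to {high:.1%})")
```

As the surrounding text notes, an interval built this way reflects sampling error only; non-sampling, imputation, and specification errors would widen the true range in ways that cannot be quantified.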
For Head Start, the administrative data did not allow us to determine the number of children who are actually served in the program. The data were only on the number of program slots funded, and more than one child can be served with each slot over the course of a program year if there is turnover in participation. However, for the purposes of calculating Head Start coverage rates, we assumed that only one child filled a slot.

We used a HUD report on the households with worst case housing needs in 1999 and administrative data to estimate the Housing Choice Vouchers and Public Housing coverage rates. HUD used data from the American Housing Survey in this report. We assessed the reliability of the survey data and determined that the data were sufficiently reliable for use in this report. In conducting this assessment, we reviewed Bureau of the Census studies on the survey. Census reported that the American Housing Survey underestimates income and overestimates poverty for households, variables used to determine the number of people with housing needs. We were unable to quantify the error this may have produced in the estimates.

We determined that the administrative data used to estimate participation and coverage rates were reasonably reliable for this purpose. As a part of this assessment, we reviewed the data on the number of participants and we discussed this information with federal agency officials. However, with the exception of the EITC data, we did not adjust the number of participants to account for those who may have received program benefits and/or services in error. As a result, the administrative data could have produced errors in the estimates due to potential inaccuracies in reporting the number of eligible participants.

To obtain the estimated cost of providing benefits to those eligible and not participating in the Food Stamp Program, SSI, TANF cash assistance, and WIC, we contracted with the Urban Institute to provide us with these estimates. The organization used the TRIM3 model to estimate the total benefit amount for eligible non-participants, and these estimates were based on the amount that each individual or household would receive based on their characteristics. These estimates have limitations similar to those of their related participation and coverage rates, and not all of their errors could be quantified. The EITC cost estimate was based on our prior work, but the Head Start cost estimates were done during the course of this project. We estimated the potential costs for Head Start using administrative data on the average cost per slot and the number of eligible non-participating children we estimated using the CPS. We assumed that only one child would be served with each slot, even though in actuality more than one can be. We also did not account for variation in program cost at the local level, and we did not account for variation in local eligibility criteria.

All of the estimates have two major limitations. We did not include the administrative costs of serving the non-participants; this would increase the cost estimates for all programs. For some programs, federal, state, and local governments would share these administrative costs. We also did not factor in policy or economic changes that may occur if participation increased in these programs. In addition, for all programs, we calculated the estimates for each program individually, and we did not account for how changes in participation in one program could influence benefit amounts in another.
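As a rough illustration of the Head Start cost arithmetic just described, the sketch below multiplies a hypothetical average cost per funded slot by a hypothetical count of eligible, non-participating children, under the one-child-per-slot assumption noted above. Neither input is actual program data.

```python
# Illustrative only: the simple arithmetic behind a cost estimate for
# serving eligible non-participants, assuming one child per funded slot.
# Both inputs are hypothetical, not actual Head Start figures.

avg_cost_per_slot = 7_000        # hypothetical average annual cost per slot
eligible_not_served = 500_000    # hypothetical eligible, non-participating children

# Under the one-child-per-slot assumption, each unserved eligible child
# would require one additional slot at the average cost.
added_cost = avg_cost_per_slot * eligible_not_served
print(f"estimated additional cost: ${added_cost:,}")  # $3,500,000,000

# As the report notes, this omits administrative costs and local variation
# in cost and eligibility criteria, so the true figure could differ in
# ways that cannot be quantified here.
```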
We were unable to estimate the cost of serving eligible non-participants in the six other programs—CCDF, HCV, Medicaid, Pell Grants, Public Housing, and SCHIP—because we did not have enough information to determine how the characteristics of the non-participants would have affected the benefit amounts they could have received.

To obtain information about factors that affect participation and strategies to improve access and integrity for the 12 selected programs, we conducted a literature review, obtained documentation from federal officials, interviewed federal officials, and interviewed state and local officials in California, Connecticut, Georgia, Maryland, and Minnesota. The literature review consisted of studies on factors that affect participation and program integrity issues that we found through searches of the Internet and social science research databases and recommendations from federal agency officials. Among the documentation we reviewed for this objective were the available fiscal year 2004 and 2005 annual plans for each program and agency reports to Congress to determine whether the relevant agencies had stated measures for program integrity and access outcomes. We selected states to visit primarily based on their innovative practices to address barriers to participation for multiple programs we reviewed. We obtained information on innovative practices from federal officials, federal agency publications, and a database on state and local initiatives compiled by the Administration for Children and Families and several non-profit, non-partisan organizations. We also considered state efforts to ensure program integrity, program aspects that may create barriers to access, and demographic and geographic diversity in making our selection. We were not able to address all the programs we reviewed during every site visit, and information from these site visits cannot be generalized to all states and localities or to all programs we reviewed. In addition, we did not assess the effectiveness of the strategies we presented.

While participation and coverage rate estimates provide important program information that can be useful to both policymakers and program administrators, these estimates must be interpreted carefully to take into consideration such factors as survey data limitations, research methodology, timeliness of estimates, and the availability of alternative programs. These factors limit our ability to compare rates across programs and must be considered in efforts to define appropriate or desired participation or coverage rate levels.

Two national household-level surveys conducted by the Census Bureau can be used to estimate the total number of people eligible for low-income programs: the Annual Social and Economic Supplement to the Current Population Survey (CPS) and the Survey of Income and Program Participation (SIPP). Each has advantages and disadvantages. The CPS is the larger of the two surveys, and data from this survey are available annually for the prior year. In addition, it has detailed information about income, household composition, labor force participation, and receipt of many federal benefits, but it does not include all the information that may be needed to determine program eligibility. For example, although the values of assets such as automobiles and savings accounts are considered when determining financial eligibility for some of the programs we reviewed, the CPS does not collect some of these data.
The CPS also omits institutionalized individuals, some of whom may be eligible for Medicaid and SSI. Researchers at the Urban Institute and Mathematica told us that, while they were aware of these disadvantages, they continue to use CPS data in their calculations because of the relatively large sample size and because the CPS is conducted annually, which makes it a good source for yearly snapshots of participation. To compensate for the limitations in the data, researchers either make assumptions and assign information needed to determine eligibility, such as asset values, to individuals in order to determine their program eligibility, or they omit certain subpopulations such as institutionalized individuals or pregnant and postpartum women from their participation rate estimates.

The SIPP is a smaller survey than the CPS, but because it was designed specifically to better measure low-income program participation, it has some advantages over the CPS. Specifically, it includes more detailed information about individual characteristics, monthly labor force activity, monthly income, and household assets that are factored into program eligibility determinations. However, the SIPP is designed as a panel study (following the same individuals for 2½ to 4 years) and thus may be more suited for use in examining detailed characteristics of participants and reasons for movement into and out of social programs. In addition, the sample faces attrition over the survey period, and some data needed to determine eligibility are available only for a limited number of years. The National Research Council used the SIPP to estimate WIC participation rates. That estimate included pregnant women in addition to infants and children, but by 2003, when the estimate was published, the most recent year for which a SIPP-based estimate could be created was 1998.

There are some low-income programs for which neither the CPS nor the SIPP provides sufficient information to estimate participation rates. For example, estimating the proportion of enrolled students eligible for a Pell Grant would require information on student enrollment status in addition to family income information. The National Center for Education Statistics within the Department of Education conducts a survey, the National Postsecondary Student Aid Study, that includes both student enrollment and family income information. However, concerns have been raised about the reliability of the family income data for students who did not apply for federal financial assistance. We were unable to assess the reliability of this data element and thus could not determine whether the data were sufficiently reliable for our purposes. As a result, we did not estimate a participation rate for this program.

Because no existing national data provide every piece of information needed to determine the exact size of the eligible population for low-income programs, all participation rate estimates involve certain assumptions and generalizations. These assumptions can create errors that are difficult or impossible to quantify. Consequently, the margins of error we present, which only capture errors that result from sampling, could understate the true range within which participation rate estimates are most likely to fall. One way to better understand how these estimates are affected by the research methodology is to compare them to alternative participation rate estimates that have been calculated using different assumptions.
While the Urban Institute and Mathematica Policy Research, Inc., both estimate food stamp participation rates based on CPS data, they use slightly different assumptions to estimate the eligible population. For example, because the CPS does not include information on which household members purchase and prepare food together, both organizations had to make assumptions about household food preparation habits. Despite differences in assumptions, the participation rates estimated by the Urban Institute and Mathematica were very similar.

The most recent participation rate data available vary by program, and all estimates are at least 3 years old. As a result, changes in legislation, state policy, program administration, the national economy, and other factors that may have influenced the participation rate over the past few years are not reflected in these estimates. The EITC participation rate estimate is based, in part, on an estimated 32 percent of EITC claims being made in error in 1999. Since that time, Congress has enacted new tax laws and the IRS has taken steps to improve compliance. The total number of people receiving food stamps and Medicaid fell after passage of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, when participation in these programs was "de-linked" from participation in TANF. Since 2000, however, states have implemented policies to ensure that those eligible for food stamps and Medicaid continue to receive benefits when they leave TANF. Since 2000, the national economy has emerged from a recession, but economic growth, until recently, has been uneven and marked by slow job growth. Economic conditions have historically been correlated with low-income program participation. None of these factors are reflected in the numbers we presented in this report.

When interpreting the participation and coverage rate estimates we provided, it is important to keep in mind that those eligible but not participating in the programs we reviewed may not be living without assistance; many could be receiving assistance from alternative programs. In some cases, people could be eligible for more than one program we reviewed and able to receive similar services from these programs. Those eligible could also have access to state and local programs that address similar needs. This is particularly relevant for the non-entitlement programs covered by our review.

The coverage rates we provide for the two housing programs in this review are calculated in isolation, and as a result, some of the households that appear eligible but not participating in the Public Housing program are receiving Housing Choice Vouchers (HCV), and vice versa. In addition, while HCV and public housing are two of the largest federal housing programs, there are several other federal, state, and local housing programs from which eligible households could receive assistance. HUD estimated that in 1999 about a quarter of all households eligible for any kind of housing assistance received assistance.

The availability of alternative programs could also affect coverage rates for CCDF and Head Start. For example, state-financed pre-school programs provide child care and educational development services for many 4-year-olds who appear eligible but are not enrolled in Head Start. Children could be eligible for both programs but only receiving services in one, and this would not be reflected in program coverage rate estimates.
HHS officials have recognized that other federal programs, such as TANF and the Social Services Block Grant (SSBG), also provide child care to children eligible for CCDF. Because the CCDF participation rate does not reflect these additional sources of federal child care funds, HHS has produced a different coverage rate, estimating that about 26 percent of CCDF-eligible children were receiving child care services through one of four federal funding streams (CCDF, SSBG, TANF, and TANF state funds) in 2001.

States and localities are able to set the eligibility criteria for some of these programs, and the cost estimates do not account for the policy changes agencies might make if the current criteria resulted in increased participation. For example, if states found that current eligibility limits might result in increased TANF participation, they might change eligibility criteria or benefit levels to ensure state costs do not increase. Either of these changes could reduce the total additional costs. All cost estimates were calculated individually and, therefore, do not account for how participation changes in one program may influence benefit amounts in another. For example, Food Stamp Program costs may not be as high if participation in the SSI program increased, because some eligible non-participants would qualify for lower food stamp benefits due to the cash assistance. Also, as mentioned earlier, other federal, state, and local programs may provide similar benefits that non-participants could be receiving, and, therefore, they may not need or want the benefits from the programs we covered. For example, food stamp eligible non-participants with short-term nutritional needs may decide to receive assistance through food banks. In addition, the TANF estimate we provide is only the cost of providing cash assistance to eligible people who are not participating; this program offers other supports to needy families, such as transportation to work and child care, that are funded by the federal and state governments.

Carolyn Blocker, Cheri Harrington, Danielle Giese, Amanda Mackison, and Kim Siegal also made significant contributions to this report. Jay Smale, Mark Braza, Doug Sloane, Ed Nannenhorn, and Beverly Ross provided technical assistance in developing and evaluating the reliability of participation rate and potential program cost estimates; Lise Levie provided key technical assistance; and Johnnie Barnes, Carrie Watkins, and Charles Wilson provided information on the two housing programs covered in the report.

Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004.

Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.

SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearing Level. GAO-04-14. Washington, D.C.: November 12, 2003.

Federal Student Aid: Expanding Eligibility for Less Than Halftime Students Could Increase Program Costs, but Benefits Uncertain. GAO-03-905. Washington, D.C.: September 10, 2003.

Social Security Disability: Reviews of Beneficiaries' Disability Status Require Continued Attention to Improve Service Delivery. GAO-03-1027T. Washington, D.C.: July 24, 2003.
Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002.

Food Stamp Program: States' Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.

Medicaid and SCHIP: States' Enrollment and Payment Policies Can Affect Children's Access to Care. GAO-01-883. Washington, D.C.: September 10, 2001.

Food Stamp Program: Program Integrity and Participation Challenges. GAO-01-881T. Washington, D.C.: June 27, 2001.

Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits. HEHS-00-86. Washington, D.C.: April 14, 2000.

Medicaid Enrollment: Amid Declines, State Efforts to Ensure Coverage after Welfare Reform Vary. HEHS-99-163. Washington, D.C.: September 10, 1999.

Food Stamp Program: Various Factors Have Led to Declining Participation. RCED-99-185. Washington, D.C.: July 2, 1999.

Medicaid: Demographics of Nonenrolled Children Suggest State Outreach Strategies. HEHS-98-93. Washington, D.C.: March 20, 1998.
Federal agencies that administer means-tested programs are responsible for both ensuring that people have appropriate access to assistance and ensuring the integrity of the programs they oversee. To balance these two priorities appropriately, it is important for agencies to have information on program integrity and program access. Knowing the proportion of the population that qualifies for these programs relative to the numbers who actually participate can help ensure that agencies can monitor and communicate key information on program access. To better understand participation in low-income programs, this report provides information on (1) the proportion of those eligible who are participating in 12 selected low-income programs; (2) factors that influence participation in those programs; and (3) strategies used by federal, state, and local administrators to improve both access and integrity, and whether agencies monitor access by measuring participation rates.

For 12 federal programs supporting low-income people, we found that the proportion of those eligible who are enrolled varies substantially both between and within programs. Among entitlement programs (those that provide benefits to all applicants who meet program eligibility criteria), these rates ranged from about 50 to more than 70 percent. Among non-entitlement programs (those with limited funding), these rates ranged from less than 10 percent to more than 50 percent. While it may be neither feasible nor desirable for programs to serve 100 percent of those eligible for benefits, information on the share of those eligible who are enrolled in means-tested programs, and on particular recipient groups such as the elderly or families with children, can help program managers more effectively address issues related to program access. However, participation rate estimates must be interpreted carefully because of limitations in the data sources and estimation methodologies used to calculate the estimates.

Many factors influence access to low-income programs, including the type of benefits, ease of access, misperceptions about program requirements, and application and eligibility verification procedures. These factors can affect not only the share of eligible people who participate in low-income programs, but other aspects of program access as well, including the composition of the program caseload and how programs work together to serve low-income individuals and families.

Federal, state, and local administrators have implemented many strategies to achieve the goals of access and integrity, but federal agencies generally put more emphasis on tracking information and outcomes related to program integrity than program access. To better ensure that program administrators achieve program integrity goals, agencies have begun to develop measures to track and report on program integrity. Federal agencies have developed participation rate estimates for several low-income programs, but only four (CCDF, food stamps, WIC, and EITC) either currently collect and report information on the extent to which they are reaching their target populations or plan to do so. Such information can guide administrators in setting priorities and targeting scarce resources, even among programs that were not intended to serve everyone eligible for program benefits.
Unlike other federal organizations that must transform their existing cultures, TSA has the opportunity to create a culture that fosters high performance from the outset. For TSA, this means creating a culture that focuses on results rather than processes; matrixes rather than stovepipes; an external (citizen, customer, and stakeholder) perspective rather than an internal one; employee empowerment rather than micromanagement; risk management rather than risk avoidance; and knowledge sharing rather than knowledge hoarding.

TSA is an organization facing immense challenges to simultaneously build the infrastructure of a large government agency responsible for security in all modes of transportation and meet unprecedented deadlines required in ATSA to federalize aviation security. Two of the most significant deadlines require TSA to deploy federal passenger screeners at security checkpoints at 429 airports across the nation by November 19, 2002, and install explosives detection systems to screen every piece of checked baggage for explosives no later than December 31, 2002.

In July 2002, we testified before your committee on the progress TSA has made in enhancing aviation security and in meeting the deadlines to deploy federal screeners at security checkpoints and to install explosives detection systems. At that time, we reported that while TSA's efforts were well underway to hire and train thousands of key security personnel, including federal screeners and security directors, TSA had experienced unexpected delays in finding and hiring security screener personnel who met the requirements of ATSA. We also reported that while TSA had made progress in checking all bags for explosives and planning for the purchase and installation of explosives detection equipment, TSA had not kept pace with planned milestones to meet congressional deadlines for using explosives detection systems to screen 100 percent of checked baggage. In addition, we reported that TSA had not fully implemented the responsibilities required in ATSA such as the security of other modes of transportation, cargo security, and general aviation security. Finally, we also observed that the move of TSA from DOT to DHS poses further challenges that may delay progress on meeting mandated deadlines and addressing other security vulnerabilities in the nation's transportation system.

TSA and DOT leadership have also testified before the Congress at several hearings on challenges TSA was facing as it tried to meet its deadlines and other transportation security responsibilities while establishing itself as a federal organization. Leadership stated that one of TSA's challenges is to build a large organization from the ground up. Specifically, in January 2002, TSA had only approximately 15 employees of the more than 60,000 it reported it would need by the end of 2002. In addition, the Under Secretary of Transportation for Security testified that at that time the congressionally mandated cap of 45,000 on the number of employees TSA can employ would limit its ability to meet the deadlines. TSA also testified on the need for additional funding to meet its security responsibilities and the delays it experienced in receiving this funding. According to TSA and DOT, delays in funding and restrictions on the use of the additional funding at that time had undermined TSA's ability to meet the deadlines.
DOT leadership stated that TSA is especially disadvantaged by operating under a continuing resolution because it does not have money from previous years to help bridge the gaps between programmatic needs and the funding it receives under the continuing resolution.

When the Congress created TSA, it required practices consistent with other governments' initiatives to restructure their agencies in order to instill results-oriented organizational cultures. In the United States and abroad, governments have restructured their agencies to improve the delivery of government services and clarify accountability for results. During the 1980s and 1990s, the Organisation for Economic Co-operation and Development reported that its member countries increased efforts to restructure their public sector organizations for results. Among member countries, restructured organizations represent about 50 percent, and sometimes as much as 75 percent, of public expenditure and public servants.

In 1988, the United Kingdom started to restructure its government agencies to increase their focus on accountability and improve customer service. Called "executive agencies," these restructured agencies are still the predominant form of service delivery in the United Kingdom. As of December 2001, there were over 130 executive agencies covering more than three-quarters of the British civil service. In July 2002, the Prime Minister's Office of Public Services Reform reviewed the performance of these executive agencies and set out to identify management principles that may have contributed to their success. The Prime Minister's Office concluded that the restructured executive agency model has been a success and that the management principles underlying the restructured agencies continue to be highly relevant. These principles are: (1) a clear focus on delivering specified goals within a framework of accountability, (2) responsibility for performance resting clearly with the chief executive and agency staff, and (3) an agency focus that is outward rather than inward.

In the 1990s, the Congress recognized the need to restructure federal agencies and to hold them accountable for achieving program results. To this end, the Congress established performance-based organizations (PBOs), modeled after the United Kingdom's executive agencies: the Office of Student Financial Assistance, the United States Patent and Trademark Office, and the Air Traffic Organization. Designed in statute, PBOs were to commit to clear management objectives and specific targets for improved performance. These clearly defined performance goals, coupled with direct ties between the achievement of performance goals and the pay and tenure of the head of the PBO and other senior managers, were intended to lead to improved performance. Specifically, the head of the PBO is appointed for a set term, subject to annual performance agreements, and eligible for bonuses for improved organizational performance.

Similarly, for TSA, the Congress required: an Under Secretary, appointed for a 5-year term to manage TSA, who is entitled to a bonus based on performance; measurable goals to be outlined in a 5-year performance plan and reported annually; a performance management system to include individual and organizational goals for managers and employees; an annual performance agreement for the Under Secretary, senior managers, and staff; an oversight board to facilitate communication and collaboration; and public reporting requirements to build citizen confidence.
TSA will be one of more than 20 originating agencies or their components, with differing missions, cultures, systems, and procedures, that are to move into DHS. The newly created DHS is the most recent manifestation of the continuing consideration of how best to restructure government to respond to the challenges of the 21st century. At a GAO-sponsored forum on mergers and transformation, participants observed that people and cultural issues are at the center of successful mergers and transformations. These issues should not be avoided but rather aggressively addressed at the outset and throughout the process.

Within its first year of existence, TSA has made an impressive start in implementing practices that can create a results-oriented organizational culture and help TSA as it begins to take responsibility for the security of the maritime and surface modes of transportation. These practices include leadership commitment to creating a high-performing organization, strategic planning to establish results-oriented goals and measures, performance management to promote accountability for results, collaboration and communication to achieve national outcomes, and public reporting and customer service to build citizen confidence. TSA's actions and plans to implement the results-oriented practices required in ATSA, and recommended next steps that can help TSA build a results-oriented culture, are described on the following pages.

A critical element and the foundation of TSA's successful implementation of results-oriented practices will be the demonstrated and sustained commitment of its top leaders. Ultimately, successful organizations understand that they must often change their culture to successfully transform themselves, and that such a change starts with top leadership. Top leadership involvement is essential to overcoming an organization's natural resistance to change, marshalling the resources needed in many cases to improve management, and building and maintaining the organizationwide commitment to new ways of doing business. At a recent GAO-sponsored roundtable, we reported on the need to elevate attention, integrate various efforts, and institutionalize accountability in order to lead fundamental agency transformation, and to address key management functions at the highest appropriate level in the organization.

At TSA, the leadership faces a daunting challenge to create this results-oriented culture. From the outset, this challenge was exacerbated by the change in TSA's head position, the Under Secretary of Transportation for Security, just 8 months after the organization was established. TSA has continually stated its commitment to becoming a high-performing organization, and has reinforced that commitment in its performance agreements for TSA executives. TSA leadership has expressed its commitment to creating a results-oriented organizational culture.
Specifically, in its 180-day action plan report to the Congress outlining goals and milestones for defining acceptable levels of performance in aviation security, TSA stated that it is committed to "being a leading-edge, performance-based organization—an organization whose operative culture establishes performance expectations that support the mission, drives those expectations into organizational and individual performance plans, and collects objective data to assess its performance." TSA leadership also plans to use the Baldrige performance excellence criteria as a management tool to promote an awareness of quality and performance in TSA. These criteria are: leadership, strategic planning, customer and market focus, information and analysis, human resource focus, process management, and business results. TSA leadership hired a former Baldrige award application examiner to be TSA's Chief Quality Officer and to head the Office of Quality Performance. According to TSA officials, the Office of Quality Performance will serve as internal consultants to TSA management to help them use the Baldrige criteria as a tool to create a culture focused on performance.

To hold TSA's leadership accountable for achieving results, ATSA requires TSA to establish a performance agreement between the Under Secretary and the Secretary of DOT that includes organizational and individual performance goals. A TSA official told us that as of November 2002, no performance agreement had been finalized for the Under Secretary because the current Under Secretary had been serving in the position in an acting capacity. During times of transition, high-performing organizations recognize that performance agreements can reinforce accountability for organizational goals. To this end, when TSA moves into its new parent department, DHS, TSA can use performance agreements to maintain a consistent focus on its goals. ATSA also allows the Under Secretary to receive, based on a performance evaluation, a bonus of up to 30 percent of the annual rate of pay for any calendar year. However, TSA's interim performance management system does not specifically address performance bonuses for the head of TSA.

In addition, ATSA requires TSA to establish performance agreements between TSA's Under Secretary and his or her executives that set organizational and individual performance goals. TSA has created a standardized performance agreement for TSA executives as a part of its interim performance management system. TSA's executive agreements include both organizational and individual goals, as shown in figure 1. For example, each executive performance agreement includes an organizational goal such as to maintain the nation's air security and ensure an emphasis on customer satisfaction. The agreement also includes individual goals, such as to meet or exceed requirements for satisfactory performance and to demonstrate commitment to civil rights. In addition, the agreement includes competencies, such as to provide leadership in setting the workforce's expected performance levels and ensure that the executive's work unit contributes to the accomplishment of TSA's mission. TSA can strengthen these performance agreements by setting explicit targets that are directly linked to organizational goals.
Governmentwide, to help hold senior executives accountable for organizational results, federal agencies are to establish performance management systems that (1) hold senior executives accountable for their individual and organizational performance by linking performance management with the results-oriented goals of the Government Performance and Results Act (GPRA); (2) evaluate senior executive performance using measures that balance organizational results with customer satisfaction, employee perspectives, and any other measures agencies decide are appropriate; and (3) use performance results as a basis for pay, awards, and other personnel decisions. We have found that progress is needed in explicitly linking senior executive expectations for performance to results-oriented organizational goals, and that greater emphasis should be placed on fostering the necessary collaboration both within and across organizational boundaries to achieve results. Furthermore, a specific performance expectation to lead and facilitate change could be a critical element as agencies transform themselves to succeed in an environment that is more results oriented, less hierarchical, and more integrated.

Establish a performance agreement for the Under Secretary of Transportation for Security that articulates how bonuses will be tied to performance. To hold the Under Secretary accountable for achieving results, DOT, or the new parent department DHS, should create a performance agreement for the Under Secretary that includes organizational and individual goals and also articulates how bonuses for the Under Secretary will be tied to his performance in achieving the goals in the performance agreement.

Add expectations in performance agreements for top leadership to foster the culture of a high-performing organization. Successful organizations understand that top leadership performance and accountability are critical to their success and to the success of the federal government's transformation. TSA can strengthen its current performance agreements for top leadership, including the Under Secretary and senior executives, by adding performance expectations that establish explicit targets directly linked to organizational goals, foster the necessary collaboration within and across organizational boundaries to achieve results, and demonstrate commitment to lead and facilitate change.

Strategic planning is a continuous, dynamic, and inclusive process that provides the foundation for the fundamental results the organization seeks to achieve. ATSA's requirements for TSA are consistent with the results-oriented planning and reporting principles embodied in GPRA. GPRA provides a strategic planning and management framework intended to improve federal performance and hold agencies accountable for achieving results. Effective implementation of this framework requires agencies to clearly establish results-oriented performance goals in strategic and annual performance plans for which they will be held accountable, measure progress toward those goals, determine the strategies and resources to effectively accomplish the goals, use performance information to make the programmatic decisions necessary to improve performance, and formally communicate results in performance reports. Specifically, ATSA requires TSA to submit to the Congress a 5-year performance plan and an annual performance report, but it does not specify when these documents are to be submitted.
TSA has taken the first steps toward establishing a performance planning and reporting framework consistent with GPRA. The starting point for the framework envisioned under GPRA is the strategic plan that describes an organization's mission, outcome-oriented strategic goals, strategies to achieve these goals, and key factors beyond the agency's control that could impact the goals' achievement, among other things. According to TSA officials, TSA is currently developing its strategic plan. TSA has, however, made components of its plan public. TSA has articulated its mission, vision, and values. TSA's mission is to protect the nation's transportation systems to ensure freedom of movement for people and commerce. TSA's vision is to continuously set the standard for excellence in transportation security through people, processes, and technologies; its values are integrity, innovation, courtesy and respect, competence, customer focus, dedication, diversity, and teamwork. In addition, TSA has set an overall strategic goal: to prevent intentional harm or disruption to the transportation system by terrorists or other persons intending to cause harm. To support this strategic goal, TSA has defined three performance goals: meeting the ATSA mandates to federalize transportation security, maintaining and improving aviation security, and servicing TSA customers.

To demonstrate its progress toward meeting its performance goals, TSA established an initial set of 32 performance measures. For example, TSA's primary performance measures for its performance goal to maintain and improve aviation security are the percentage of bags screened by explosives detection systems and the percentage of trained screeners. Other measures to complement these primary measures include the percentage of explosives detection systems deployed, the percentage of screeners with 60 hours of on-the-job training completed, and the percentage of screeners compliant with training standards. TSA plans to develop more outcome-oriented goals and measures in fiscal year 2003 and is in the process of finalizing strategies to achieve its goals.

To report on its progress in meeting its performance goals and measures, TSA has begun to build the capacity to gather and use organizational performance information. TSA has installed an automated performance management information system, which became operational in April 2002 and is designed to collect and report data on TSA's performance measures. Data will be collected from federal security directors, security screener supervisors, and headquarters officials and reported through Web-based reports designed for internal decision making and external reporting. According to TSA officials, the system will be expanded to include goals and measures related to all modes of transportation in upcoming fiscal years. TSA reported that, as required by ATSA, it submitted its first annual performance report on November 19, 2002.

TSA has linked its aviation security performance goals to those of its parent department, DOT, to provide a clear, direct understanding of how the achievement of its performance goals will lead to the achievement of DOT's strategic goal for homeland security, as shown in figure 2.
Specifically, TSA’s performance goals to federalize and maintain and improve aviation security are intended to contribute to DOT’s performance goal to “reduce vulnerability to crime and terrorism and promote regional stability” and its strategic goal on homeland security, to “ensure the security of the transportation system for the movement of people and goods and support the National Security Strategy.” As TSA establishes its performance goals for other modes of transportation, it should continue to align its goals with DOT’s goals. When TSA moves to DHS, it will be necessary to maintain goal alignment with its new parent department. GPRA requires agencies to consult with the Congress and solicit the views of other stakeholders as they develop their strategic plans. However, TSA has stated few plans to involve stakeholders in its strategic planning process. Such consultations provide an important opportunity for TSA and the Congress to work together to ensure that agency missions are focused, goal are specific and results oriented, and strategies and funding expectations are appropriate and reasonable. Results-oriented organizations also recognize that it is important to broaden stakeholder involvement to create a basic understanding among stakeholders of competing goals. As TSA works to meet its goals, it will continue to face ongoing challenges to balance aviation security against customer service. While TSA needs to screen passengers and baggage carefully to meet its goal to maintain the security of the aviation system, it must efficiently move customers and their baggage through the aviation system to minimize passenger inconvenience to encourage them to continue using air transportation. Establish security performance goals and measures for all modes of transportation as part of a strategic planning process that involves stakeholders. Stakeholder involvement, and specifically congressional consultation, is particularly important for TSA in its strategic planning process given the importance of its mission and the necessity to establish additional goals to address other modes of transportation. In addition, TSA operates in a complex political environment where there will be the ongoing need to balance the sometimes conflicting goals of security and customer service. We identified approaches that can enhance the usefulness of consultations between TSA and the Congress that can also apply to consultations with external stakeholders. Among the approaches are the following. Engage the right people. Including people who are knowledgeable about the topic at hand, such as TSA officials who are knowledgeable about particular transportation modes and specific programs, is important when consulting with the Congress and other stakeholders. Address differing views. Stakeholders may have differing views on what they believe the level of detail discussed during consultation meetings should be. For example, participants may want to engage in discussion that goes beyond TSA’s mission to the appropriate balance between enforcing security and servicing passengers. Establish a consultation process that is iterative. All parties involved in transportation security recognize that the consultation process should be continuous and they should meet as many times as both sides feel are necessary to reach a reasonable consensus on TSA’s strategic and performance goals to address transportation security. Apply practices that have been shown to provide useful information in agency performance plans. 
Results-oriented organizations focus on the process of performance planning rather than the planning documents themselves. GPRA was intended, in part, to improve congressional decision making by giving the Congress comprehensive and reliable information on the extent to which federal programs are fulfilling their statutory intent. We have identified practices that TSA can apply to ensure the usefulness of its required 5-year performance plan to TSA managers, the Congress, and other decision makers and interested parties. Table 2 outlines these practices.

TSA has an opportunity to use its individual performance management system as a strategic tool to drive internal change and achieve external results. TSA, as a new organization, has a critical challenge in (1) integrating potentially more than 60,000 employees into a new organization, (2) creating a common culture, and (3) achieving its security, customer satisfaction, and related performance goals in an effective, efficient, and economical manner. The individual performance management system can be an essential tool in meeting all three of these challenges. To help agency leaders manage their people and integrate human capital considerations into daily decision making and the program results they seek to achieve, we developed a strategic human capital model. The model highlights the kinds of thinking that agencies should apply, as well as some of the steps they can take, to make progress in managing human capital strategically. In our model, we identify two critical success factors that can assist organizations in creating results-oriented cultures: (1) a "line of sight" showing how unit and individual performance link to organizational goals and (2) the inclusiveness of employees. TSA can apply these factors to its performance management system to help create a results-oriented culture.

ATSA requires TSA to establish a performance management system that is to strengthen the organization's effectiveness by providing for the establishment of goals for managers, employees, and the organization that are consistent with the agency's performance plan. TSA used the Federal Aviation Administration's system until July 2002, when TSA leadership approved an interim employee performance management system. The interim system is to remain in place until a permanent system is created and implemented. As of November 2002, TSA had not established a time frame for implementing its permanent performance management system.

TSA's interim system provides specific requirements for planning individual performance, monitoring that performance, determining employee development needs, appraising performance, and recognizing and rewarding performance, as shown in figure 3. For example, at the beginning of the appraisal cycle, employees' expectations are to be established using a performance agreement; throughout the cycle, supervisors are to monitor performance; halfway through the performance cycle, supervisors are to provide feedback to employees and identify employee development needs; and at the end of the cycle, supervisors are to appraise performance at one of two levels: fully satisfactory or unacceptable. Employees may then receive a bonus or other incentive if their performance is at the fully satisfactory level. TSA's first appraisal cycle ended November 15, 2002. In addition, ATSA requires that TSA's performance agreements for its employees include individual and organizational goals.
These performance agreements can help TSA align individual and organizational goals and establish the line of sight that helps create a results-oriented culture. TSA has created standardized performance agreements for groups of employees, including transportation security screeners, supervisory transportation security screeners, supervisors, and executives. These performance agreements include a consistent set of organizational goals, individual goals, and standards for satisfactory performance. Supervisors may customize performance agreements for the individual job by adding additional organizational and individual goals and standards of performance. For example, the standardized performance agreement for security screeners includes two organizational goals: (1) to improve and maintain the security of American air travel by effectively deterring or preventing successful terrorist (or other) incidents on aircraft and at airports, with minimal disruption to transportation and complete service to travelers and (2) to ensure an emphasis on customer satisfaction while maintaining the nation’s air security. In addition, the standardized performance agreement for security screeners includes an individual goal to consistently meet or exceed the basic proficiency requirements by vigilantly carrying out duties with the utmost attention to the task at hand; demonstrating the highest levels of courtesy to travelers and working to maximize their levels of satisfaction with TSA services; working as an effective team member at the assigned post to ensure that security violations do not get past the team; contributing to the accomplishment of TSA’s mission and vision; behaving in a way that supports TSA’s values; and demonstrating the highest level of concern for the civil rights of coworkers and the traveling public. Finally, the agreement includes standards for satisfactory performance for security screeners. Standards include (1) completing all required training successfully and as scheduled, performing satisfactorily on required proficiency reviews, and passing operational testing satisfactorily and (2) performing security functions in an effective and timely manner in accordance with TSA-prescribed guidelines. As described in our strategic human capital model, in addition to and concurrent with the first critical success factor of creating a line of sight showing how unit and individual performance link to organizational goals, successful organizations involve employees to build results-oriented cultures. This critical success factor is especially timely for TSA as it transitions from its interim performance management system and finalizes its permanent system. Particularly when developing a new results-oriented performance management system, leading organizations have found that actively involving employees can build confidence and belief in the system. We reported that when reforming their performance management systems, agencies in other countries consulted a wide range of stakeholders early in the process, obtained feedback directly from employees, and engaged employee unions or associations.

Build on the current performance agreements to achieve additional benefits. Successful organizations design and implement performance management systems that align individual employee performance expectations with agency goals so that individuals understand the connections between their daily activities and their organization’s success.
While TSA has created standardized performance agreements for groups of employees as a part of its interim performance management system, it can also use its performance agreements to achieve benefits by doing the following.

Strengthen alignment of results-oriented goals with daily operations. Performance agreements can define accountability for specific goals and help align daily operations with agencies’ results-oriented programmatic goals. As TSA continues to develop and gain experience with performance agreements, TSA should ensure an explicit link exists between individual performance expectations and organizational goals for all employees. For example, while TSA lists certain competencies for individuals that are related to organizational goals, such as demonstrating the highest level of courtesy to travelers, the next step is to set individual targets to meet the organizational goals.

Foster collaboration across organizational boundaries. Performance agreements can encourage employees to work across traditional organizational boundaries or “silos” by focusing on the achievement of organizationwide goals. For example, as TSA continues to assume responsibility for security in all modes of transportation, TSA can use employee performance agreements to set expectations that encourage employees to work collaboratively to achieve cross-cutting transportation security goals.

Enhance opportunities to discuss and routinely use performance information to make program improvements. Performance agreements can facilitate communication about organizational performance and pinpoint opportunities to improve performance. TSA’s performance management process offers several opportunities to discuss an individual’s performance and how that individual can contribute to TSA’s goals: when meeting to set performance expectations, when reviewing midyear progress, and when assessing performance at year-end. These formal expectation, feedback, and assessment sessions are important to clarify responsibility and accountability. As a next step, TSA can ensure that it uses its performance agreements as a critical component of its performance management process to have ongoing, two-way consultations between employees and their supervisors. In other words, strategic performance management—a performance management system that is tied to organizational goals—is not just a once- or twice-a-year formal occurrence but rather is ongoing and routine.

Provide a results-oriented basis for individual accountability. Performance agreements can serve as the basis for performance evaluations. An assessment of performance against the performance agreement can provide TSA and its employees the data needed to better achieve organizational goals.

Maintain continuity of program goals during transitions. Performance agreements help to maintain a consistent focus on a set of broad programmatic priorities during changes in leadership and organization. TSA can use its process for developing performance agreements as a tool to communicate priorities and instill those priorities throughout the organization during periods of transition.

Ensure the permanent performance management system makes meaningful distinctions in performance.
In addition to providing candid and constructive feedback to help individual employees maximize their potential in understanding and realizing the goals and objectives of the agency, an effective performance management system provides management with the objective, fact-based information it needs to reward top performers and the necessary information and documentation to deal with poor performers. Under TSA’s interim performance management system, employee performance is appraised at only two levels—fully satisfactory and unacceptable. We have observed that such a pass/fail system does not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. As a next step, TSA should consider appraisal systems with more than two standards of performance. By using its performance agreements as the basis for making distinctions in performance, TSA can have the objective, fact-based information and the documentation necessary for an effective performance management system.

Involve employees in developing its permanent performance management system. TSA has the opportunity to create a culture that values the importance of employees in helping TSA achieve its goals. Employee involvement improves the quality of the system by providing a front-line perspective and helping to create organizationwide understanding and ownership. In addition, even after TSA develops its permanent performance management system, involving employees in the process can help employees perceive that the system is fair. We have identified practices that organizations can apply to further involve employees. The practices TSA can adopt to promote inclusiveness and encourage employee ownership of the permanent performance management system include the following.

Seek employee input. Leading organizations not only provide information to employees but also commonly seek their employees’ input on a periodic basis and explicitly address and use that input to adjust their human capital practices. As TSA matures as an organization, it can collect feedback using employee satisfaction surveys, focus groups, employee advisory councils, and/or employee task forces.

Involve employees in planning and sharing performance information. Involving employees in the planning process to develop agency goals helps to increase employees’ understanding and acceptance of those goals and improve motivation and morale. For TSA, employees’ understanding and acceptance of its goals is particularly important because employees are to be held accountable for achieving the goals set out in their performance agreements.

Virtually all of the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. Thus, like virtually all federal agencies, TSA must collaborate and communicate with stakeholders within and outside the government to achieve meaningful results and participate in matrixed relationships—networks of governmental, private sector, and nongovernmental organizations working together—to achieve its goals. This collaboration and communication will be even more important given the complex nature of national security-related goals. ATSA requires TSA to collaborate and communicate with organizations across the government and in the private sector to accomplish its mission. For example, ATSA requires TSA to do the following.
Work with the Federal Aviation Administration to establish procedures for notifying its Administrator and others of the identity of individuals known to pose, or suspected of posing, a risk of air piracy or terrorism or a threat to airline or passenger safety.

Enter into memorandums of understanding with federal agencies or other entities to share or otherwise cross-check data on individuals identified on federal agency databases who may pose a risk to transportation or national security.

Coordinate with federal agencies and air carriers to require air carriers to use information from government agencies to identify individuals on passenger lists who may be a threat to civil aviation or national security; and if such an individual is identified, notify appropriate law enforcement agencies, prevent the individual from boarding an aircraft, or take other appropriate action with respect to that individual.

TSA has established a number of offices to collaborate and communicate with external stakeholders. The Office of Security Regulation and Policy is to coordinate with TSA’s offices and stakeholders on policy, rulemaking, and customer service issues. The Office of Communications and Public Information is to serve as an advisor to senior leadership on the public impacts of major policy decisions, internal audience concerns, and community reaction to and civilian news media interest in TSA missions and functions. The Office of Law Enforcement and Security Liaison is to serve as the national-level liaison with federal, state, and local law enforcement and the international community and is to administer TSA’s Freedom of Information Act requirement, which allows the public to request information about TSA policies, procedures, operations, and decisions, among other things, and TSA’s Privacy Act requirement, which allows members of the public to request any records that the government has about them. The Office of Legislative Affairs is to be responsible for working and communicating with the Congress. TSA has experienced some challenges with collaboration and communication. According to TSA officials, TSA is still defining and clarifying the specific roles and responsibilities of the offices that are to communicate with stakeholders. As of December 2002, TSA did not have written guidance to provide information about TSA communication roles and responsibilities to other TSA employees or to external stakeholders. In addition, the Under Secretary testified that there were some problems with reaching stakeholders in the past, specifically the airlines and airports. The Under Secretary recognized that collaboration with these and other stakeholders is important to ensure aviation security and made a personal commitment that TSA will make a concerted effort to communicate better with stakeholders. In September 2002, we briefed the staff of the Committee on Transportation and Infrastructure, U.S. House of Representatives, that some officials at selected airports told us that they had not received clear and comprehensive guidance from TSA on issues concerning the feasibility of meeting the baggage screening deadline. TSA officials that we spoke to are aware of the importance of collaboration and communication across the government. According to TSA officials, the primary tools TSA will use to formally collaborate with governmental entities are memorandums of agreement and memorandums of understanding.
TSA is developing memorandums of agreement with the other modal administrations within DOT. TSA officials told us that they also plan to develop memorandums of agreement and memorandums of understanding with local law enforcement agencies, the Department of Defense, the Department of State, the Federal Bureau of Investigation, the Nuclear Regulatory Commission, the Federal Emergency Management Agency, and the Customs Service, among others. They told us that the purposes of the memorandums are to delineate clear lines of authority and responsibility between the parties; improve services to DOT’s modal administrations, other federal, state, and local agencies, nongovernmental stakeholders, and the American public; and achieve national security performance goals, among other purposes. TSA plans to complete the memorandums no later than March 1, 2003. As an additional mechanism to facilitate collaboration and communication, ATSA established the Transportation Security Oversight Board. According to ATSA, the Board, which must meet at least every 3 months, should consist of cabinet heads; directors; high-ranking officials and/or their designees from DOT, the Department of Defense, the Department of Justice, the Department of the Treasury, and the Central Intelligence Agency; and presidentially appointed representatives from the National Security Council and the Office of Homeland Security. The Secretary of Transportation is to be the chairperson of the Board. The Board’s responsibilities include, among other things, reviewing transportation plans; facilitating the coordination of intelligence, security, and law enforcement activities affecting transportation; and facilitating the sharing of intelligence, security, and law enforcement information affecting transportation. The Board, established within DOT, met twice in 2002. TSA officials noted that the Board is an excellent mechanism to share information with national security agencies across government and has helped focus the national security community on the threats to the transportation system, which TSA believes is a critical element of meeting the mandates in ATSA.

Define more clearly the collaboration and communication roles and responsibilities of TSA’s various offices. To help ensure that collaboration and communication with stakeholders are consistent and mutually reinforcing, TSA should more fully define and clarify the collaboration and communication responsibilities of the many offices that interact with its stakeholders. TSA should ensure that both internal TSA staff and external stakeholders can identify who is responsible for collaboration and communication at TSA.

Formalize roles and responsibilities among governmental entities for transportation security. Finalizing memorandums of agreement and memorandums of understanding with the other modal administrations within DOT, as well as other government agencies as appropriate, can help TSA successfully manage the matrixed relationships necessary to achieve security in all modes of transportation. For example, agreements between TSA and the modal administrations can address such issues as separating responsibilities for standards and regulations, funding, coordinating with customers, and implementing future security initiatives. Although the memorandums may change when TSA moves to DHS, TSA should continue to make progress in formalizing its roles and responsibilities until the transition takes place.
Federal agencies can promote greater transparency of government by publicizing what they intend to achieve and by being held accountable for achieving those results. Such transparency can improve the confidence of the American people in the capabilities of the federal government. Improving public confidence is especially critical for TSA as it works to achieve its goals of improving transportation security. ATSA required TSA to issue specific reports to the Congress on its activities and progress in establishing and meeting its goals. Specifically, ATSA required TSA to provide to the Congress, within 180 days of enactment of the legislation, an action plan with goals and milestones that was to outline how acceptable levels of performance for aviation security would be achieved. In accordance with the time frames outlined in ATSA, TSA submitted the action plan to the Congress on May 19, 2002, and has made this report available to the public on its Web site. The action plan, entitled “Performance Targets and Action Plan, 180 Day Report to Congress,” made public TSA’s strategic and performance goals, TSA’s performance measures, and the performance measurement information system. ATSA also required two progress reports within 6 months of the enactment of the legislation. TSA released these reports within the established time frame. The first progress report was to describe TSA’s progress to date on the evaluation and implementation of actions listed in the legislation. TSA submitted this progress report, entitled “Report to Congress on Enhanced Security Measures,” on May 19, 2002, and made the report available to the public on its Web site. Some of the actions TSA reported it was evaluating include the following.

Establish a uniform system of identification for all state and local law enforcement personnel for use in obtaining permission to carry weapons in aircraft cabins and in obtaining access to a secured area of an airport.

Establish requirements to implement trusted passenger programs and use available technologies to expedite the security screening of passengers who participate in such programs.

Provide for the use of technologies to enable the private and secure communication of threats to aid in the screening of passengers and other individuals on airport property who are identified on any state or federal security-related database, for the purpose of having an integrated response of the various authorized airport security forces.

The second progress report was to describe the deployment of passenger and baggage screening systems. TSA submitted this report on May 18, 2002, and has made nonsensitive portions of the report available on its Web site. The report, entitled “Deployment of Screening Systems Strategy & Progress,” provided the Congress with TSA’s progress on and strategy for meeting the mandated deadlines to deploy federal screeners at security checkpoints at 429 airports and to have systems in place for screening every piece of checked baggage for explosives. For example, TSA reported that at that time it had identified security screener standards; selected private contractors to recruit, assess, and train security screeners; developed a preliminary plan for deploying federal screeners to the airports; developed an initial screening checkpoint design; and reviewed available and emerging explosives detection technology. The report did not include all of the information required in ATSA.
For example, ATSA required specific information such as the dates of installation of each system to screen all checked baggage for explosives and the date each system would be operational. Since TSA has issued the statutorily required action plan and progress reports, it has continued to publicly report on its progress. Specifically, TSA created a Web site that provides information for customers and the public, including updates on its progress toward meeting the deadlines in ATSA; speeches, statements, and testimonies by TSA and DOT leadership; information on aviation security technology such as explosives detection systems; fact sheets on TSA contractors; frequently asked questions related to TSA’s policies and procedures; information for the traveling public; and information on employment opportunities with TSA. For example, a private citizen could find out when TSA would be hiring security screeners at her or his local airport, how to apply for a position with TSA, and what objects are prohibited and permitted to be carried onto an airplane. In addition, TSA created an Office of Communications and Public Information. The purpose of this office is to provide information to the general public concerning TSA, its people, programs, policies, and events. To facilitate this mission, the Office of Communications and Public Information maintains a call center to receive and respond to inquiries from the public. This office also performs a variety of other functions. For example, the office develops statements, position papers, policy releases, media alerts, and marketing plans to inform and educate the public. TSA has taken several actions that are intended to focus on customer satisfaction and be responsive to customer concerns in delivering critical and sensitive services. TSA established an ombudsman position to, among other things, serve external customers. Specifically, TSA’s ombudsman is responsible for recommending and influencing systemic change where necessary to improve TSA operations and customer service. As of November 2002, TSA was recruiting to fill this position. We have reported that through the impartial and independent investigation of citizens’ complaints, federal ombudsmen help agencies be more responsive to the public, including people who believe their concerns have not been dealt with fairly or fully through normal channels. Ombudsmen may recommend ways to resolve individual complaints or more systemic problems, and may help to informally resolve disagreements between the agency and the public. In addition, TSA is tracking performance on its customer service. For example, TSA’s primary performance measures for customer satisfaction are the average wait time and the percentage of passenger complaints per 1,000 passengers. Other measures to gauge customer satisfaction include the percentage of flights delayed due to security issues, the percentage of incidents/interventions per 1,000 passengers, the number of weapons seized per 1,000 passengers, and the number of seats delayed due to security issues, among others. As part of its ongoing challenge to balance security against customer service, TSA is reviewing existing security procedures in order to eliminate those that do not enhance security or customer service. For example, the Under Secretary testified that TSA has recently eliminated two procedures to reduce customers’ “hassle factor” at airports.
In August 2002, TSA allowed passengers to carry beverages in paper or foam polystyrene containers through walk-through metal detectors and prohibited screeners from asking passengers to drink or eat from any containers of food or liquid as a security clearance procedure. TSA also eliminated the requirement for the airlines to ask a series of security-related questions to customers at check-in. In addition, TSA recently lifted the existing rule that prohibits parking within 300 feet of airport terminals. TSA has replaced this rule with parking security measures specific to each airport and linked to the national threat level. Lastly, TSA is also planning to create a customer satisfaction index (CSI) for aviation operations, which includes collecting customer information from national polls, passenger surveys at airports, the TSA call center, and customer feedback at airports. TSA intends to use data from the CSI to improve performance. As a first step, TSA conducted 12 focus groups with air travelers to help it understand the aspects of customer experiences that influence satisfaction with and confidence in aviation security. TSA learned from the focus groups that the federalization of aviation security significantly increased travelers’ satisfaction and confidence; that the key attributes driving increased satisfaction and confidence include attentiveness of screeners, thoroughness of the screening process, professionalism of the workforce, and consistency of the process across airports; that wait time was not a significant driver of satisfaction, and participants said they would be willing to wait longer if they thought it would make them more secure; that the lack of checked baggage screening reduces confidence; and that secondary screening processes are a significant driver of reduced satisfaction. The results of the focus groups will help TSA develop the passenger surveys to be used to collect data for the CSI. TSA intends to implement the CSI for aviation operations in 2003 and to expand the CSI to include other stakeholders, such as airport operators, air carriers, and customers of other modes of transportation.

Fill the ombudsman position to facilitate responsiveness of TSA to the public. To ensure TSA is as responsive to the public as possible and is able to identify and resolve individual complaints and systemic problems, TSA should fill its ombudsman position as soon as high-quality candidates can be identified.

Continue to develop and implement mechanisms, such as the CSI, to gauge customer satisfaction and improve customer service. TSA has identified customer satisfaction as one of its three annual performance goals. By combining data on its service delivery from a number of sources, such as the CSI, TSA will be able to obtain a robust picture of customer satisfaction, which can be used to improve performance. TSA should complete the planning and development of the CSI and begin its implementation.

Never has a results-oriented focus been more critical than today, when the security of America’s citizens depends so directly and immediately on the results of many federal programs. TSA has faced immense challenges in its first year of existence to build the infrastructure of a large organization and meet mandated deadlines to federalize aviation security. As TSA begins to take responsibility for security in the maritime and surface modes of transportation, its current and future challenge is to build, sustain, and institutionalize the organizational capacity to help it achieve its current and future goals.
As TSA moves into the newly created DHS, TSA has an opportunity to continue to foster a results-oriented culture. In this regard, TSA has started to put in place the foundation of this results-oriented culture through leadership commitment to creating a high-performing organization, strategic planning to establish results-oriented goals and measures, performance management to promote accountability for results, collaboration and communication to achieve national outcomes, and public reporting and customer service to build citizen confidence. This foundation can serve TSA well in DHS and help TSA to focus on and achieve its mission to protect the nation’s transportation systems to ensure freedom of movement for people and commerce.

We recommend that the Secretary of Transportation, in conjunction with the Under Secretary of Transportation for Security, continue TSA’s leadership commitment to creating a high-performing organization; this includes next steps to establish a performance agreement for the Under Secretary that articulates how bonuses will be tied to performance and to add expectations in performance agreements for top leadership to foster the culture of a high-performing organization.

We recommend that the Under Secretary of Transportation for Security take the next steps to continue to implement the following results-oriented practices.

Strategic planning to establish results-oriented goals and measures, which includes next steps to establish security performance goals and measures for all modes of transportation as part of a strategic planning process that involves stakeholders and to apply practices that have been shown to provide useful information in agency performance plans.

Performance management to promote accountability for results, which includes next steps to build on the current performance agreements to achieve additional benefits, to ensure the permanent performance management system makes meaningful distinctions in performance, and to involve employees in developing its performance management system.

Collaboration and communication to achieve national outcomes, which includes next steps to define more clearly the collaboration and communication roles and responsibilities of TSA’s various offices and to formalize roles and responsibilities among governmental entities for transportation security.

Public reporting and customer service to build citizen confidence, which includes next steps to fill the ombudsman position to facilitate responsiveness of TSA to the public and to continue to develop and implement mechanisms, such as the CSI, to gauge customer satisfaction and improve customer service.

We provided drafts of this report in December 2002 to officials from DOT, including TSA, for their review and comment. TSA’s Director of Strategic Management and Analysis provided oral comments on behalf of DOT and TSA, generally agreeing with the contents, findings, and recommendations of the draft report. She also provided minor technical clarifications, and we made those changes where appropriate. In addition, she provided updated information on TSA’s progress in its strategic planning, collaboration and communication, and customer service since the completion of our audit work. We added that information where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days from the date of this letter.
At that time, we will provide copies of this report to the Secretary of Transportation, the Under Secretary of Transportation for Security, the Director of the Office of Homeland Security, the Director of the Office of Personnel Management, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me or Lisa Shames at (202) 512-6806. Marti Tracy was a key contributor to this report.

The objective of our review was to describe the Transportation Security Administration’s (TSA) actions and plans for implementing the results-oriented practices required in the Aviation and Transportation Security Act (ATSA) and to recommend next steps for TSA to build a results-oriented organizational culture and to establish a positive control environment. To identify results-oriented practices and recommend next steps, we reviewed our models, guides, reports, and other products on strategic planning and performance measurement, strategic human capital management, transformation efforts, and other related areas. (See the related GAO products listed at the end of this report for a list of our products in these areas.) We next analyzed ATSA in relation to our products to identify any results-oriented practices that were statutorily required in the legislation. In addition, we reviewed TSA and Department of Transportation missions, performance goals and measures, performance agreements, policies and procedures, and organizational charts and other relevant documentation. To describe TSA’s status in implementing these results-oriented practices, we interviewed 25 officials from various TSA offices, including strategic planning, human capital, training, budget, public affairs, and policy, among others. We also visited Baltimore-Washington International Airport after it was transitioned to federal control to talk to front-line managers about their responsibilities and specifically their role in providing performance data to headquarters. We developed the recommended next steps by referring to our models, guides, reports, and other products on results-oriented practices and identifying additional practices that were associated with and would further complement or support current TSA efforts. We performed our work from May through September 2002 in accordance with generally accepted government auditing standards. The following list provides information on recent GAO products related to the Transportation Security Administration (TSA), transportation security, and the results-oriented practices discussed in this report. These and other GAO products can be found at www.gao.gov.
Railroads are the primary mode of transportation for many products, especially for such bulk commodities as coal and grain. Yet by the 1970s, American freight railroads were in a serious financial decline. Congress responded by passing the Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980. These acts reduced rail regulation and encouraged greater reliance on competition to set rates. Railroads have also continued to consolidate (through such actions as mergers, purchases, changes in control, and acquisitions) to reduce costs, increase efficiencies, and improve their financial health. The 1976 act limited the authority of the Interstate Commerce Commission (now the Surface Transportation Board) to regulate rates to instances in which there is an absence of effective competition—that is, where a railroad is “market dominant.” The 1980 act made it federal policy to rely, where possible, on competition and the demand for rail services (called demand-based differential pricing) to establish reasonable rates. Differential pricing recognizes that inherent in the rail industry cost structure are joint and common costs that cannot be attributed to particular traffic. Under demand-based differential pricing, railroads recover a greater proportion of these unattributable costs from rates charged to those with a greater dependency on rail transportation. Among other things, the 1980 act also (1) allowed railroads to market their services more effectively by negotiating transportation contracts (generally offering reduced rates in return for guaranteed volumes) containing confidential terms and conditions; (2) limited collective rate-setting to those railroads actually involved in a joint movement of goods; and (3) permitted railroads to change their rates without challenge in accordance with a rail cost adjustment factor. Furthermore, both acts required the Interstate Commerce Commission to exempt railroad transportation from economic regulation in certain instances. The Staggers Rail Act required exemptions where regulation is not necessary to carry out rail transportation policy and where a transaction or service is of limited scope or where regulation is not needed to protect shippers from an abuse of market power. During the 1980s and 1990s, railroads used their increased pricing freedoms to improve their financial health and competitiveness. In addition, the railroad industry has continued to consolidate in the last 2 decades to become more competitive by reducing costs and increasing efficiencies. (This consolidation continues a trend that has been occurring since the nineteenth century.) In 1976, there were 30 independent Class I railroad systems, consisting of 63 Class I railroads. (Class I railroads are the nation’s largest railroads.) Currently there are 7 railroad systems, consisting of 8 Class I railroads. Half of that reduction was attributable to consolidations. The 8 Class I railroads are the Burlington Northern and Santa Fe Railway Co.; CSX Transportation, Inc.; Grand Trunk Western Railroad, Inc.; Illinois Central Railroad Co.; Kansas City Southern Railway Co.; Norfolk Southern Railway Co.; Soo Line Railroad Co.; and Union Pacific Railroad Co. The Surface Transportation Board is the industry’s economic regulator. The board is a decisionally independent adjudicatory agency administratively housed within the Department of Transportation.
Among other things, board approval is needed for market entry and exit of railroads and for railroad mergers and consolidations. The board also adjudicates complaints concerning the quality of rail service and the reasonableness of rail rates. Under the ICC Termination Act of 1995, the board may review the reasonableness of a rate only upon a shipper’s complaint. Moreover, the board may consider the reasonableness of a rate only if (1) the revenue produced is equal to or greater than 180 percent of the railroad’s variable costs for providing the service and (2) it finds that the railroad in question has market dominance for the traffic at issue. If the revenue produced by that traffic equals or exceeds the statutory threshold, then the board examines intramodal and intermodal competition to determine whether the railroad has market dominance for that traffic and, if so, whether the challenged rates are reasonable. From 1997 through 2000, there were two periods during which major portions of the rail industry experienced serious service problems. The first began in July 1997, during implementation of the Union Pacific Railroad and Southern Pacific Transportation Company merger. As a result of aging infrastructure in the Houston, Texas, area that was inadequate to cope with a surge in demand, congestion on this system began affecting rail service throughout the western United States. Rail service disruptions and lengthy shipment delays continued through the rest of 1997 and into 1998. The board issued a series of decisions that generally were designed to enhance the efficiency of freight movements by changing the way rail service is provided in and around the Houston area. These decisions principally focused on the Houston/Gulf Coast area and included an emergency service order to address the service crisis. In addition, CSX Transportation and Norfolk Southern Corporation began experiencing service problems in the summer and early fall of 1999, shortly after they began absorbing their respective parts of the Consolidated Rail Corporation (Conrail). These service problems caused congestion and shipment delays, primarily in the Midwest and Northeastern parts of the country. By early 2000, those service problems had largely been resolved without formal board action. Rail rates generally have continued to fall nationwide for the commodities we studied and in the specific markets we reviewed. However, in several markets rates either increased over the 4-year period for certain commodities or increased and then later fell, resulting in an overall decrease for the period. There may be a variety of reasons why rail rates change over time, including increases or decreases in production or export of various commodities (such as coal or grain); changes in railroad costs; changes in use of contracts that tie rates to specific volumes of business; service problems that could affect the ability of railroads to supply railcars, crews, and locomotive power to meet the demand for rail transportation; or the degree of competition. We do not attempt to identify and explain all the various reasons for changes in the rail rates we examined. Rather, our aim is to put rate changes for particular commodities into context with some of the economic or rail industry conditions that might have affected them from 1997 through 2000. 
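The rate figures discussed below are expressed in real (inflation-adjusted) cents per ton-mile. As a minimal sketch of that unit's arithmetic (the function and every number below are invented for illustration and are not drawn from the Carload Waybill Sample or from this analysis), a rate per ton-mile divides a movement's revenue by the tons carried times the miles hauled, and a price deflator restates the result in constant dollars:

    # Illustrative only: hypothetical function and figures, not data
    # from the Carload Waybill Sample or from this analysis.
    def real_rate_cents_per_ton_mile(revenue_dollars, tons, miles, deflator=1.0):
        """Revenue per ton-mile, restated in constant dollars.

        deflator is the ratio of the current price level to the base
        year's price level; 1.0 leaves the rate in nominal terms.
        """
        nominal_dollars = revenue_dollars / (tons * miles)
        return nominal_dollars / deflator * 100  # dollars to cents

    # A hypothetical grain movement: 10,000 tons hauled 800 miles for
    # $280,000 works out to 3.5 cents per ton-mile in nominal terms.
    print(real_rate_cents_per_ton_mile(280_000, 10_000, 800))  # 3.5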
Rates for coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, or rubber), and transportation equipment (finished motor vehicles and motor vehicle parts or accessories) generally fell from 1997 through 2000. (See fig. 1.) These decreases followed the general trend we previously reported on for the 1990–1996 period and, as before, tended to reflect railroad cost reductions brought about by continuing productivity gains in the railroad industry that have allowed railroads to reduce rates in order to be competitive. From 1997 through 2000, the rates for coal decreased slightly but steadily, from about 1.5 cents per ton-mile to about 1.4 cents per ton-mile. Coal production fluctuated over this period but generally decreased, from about 1.12 billion tons in 1998 to about 1.08 billion tons in 2000. The production of coal shipped for export also generally decreased, from about 83.5 million tons in 1997 to 58.5 million tons in 2000. The Energy Information Administration attributed these decreases to, among other things, a drawdown in coal stocks by utilities and reluctance on the part of some coal producers to expand production. Lower demand for rail transportation resulting from lower production generally results in lower rail rates. However, the demand for rail transportation (and consequently rail rates) can also be affected by changes in coal held as inventory and other supply-related factors. Board officials suggested that the decrease in coal rates during this period could also be attributed in part to increasing competition between low-sulfur Powder River Basin coal from the West and higher-sulfur Eastern coal, and to the expiration and resulting renegotiation of many long-term coal transportation contracts. The rates for wheat increased slightly from 1997 to 1998—from about 2.46 cents per ton-mile in 1997 to about 2.47 cents per ton-mile—before falling back in 1999 and 2000 to just under 2.4 cents per ton-mile. Rates for wheat may have decreased because overall production decreased, from 67.5 million tons for the 1997–1998 season to 60.5 million tons for the 2000–2001 season, despite a modest increase in demand for exports (from 28.1 million tons to 30.0 million tons over the same period). Preliminary information indicates that in 1998, the most recent year for which data were available, railroads transported over half (about 55 percent) of all wheat shipments. Corn rates generally decreased from about 2 cents per ton-mile in 1997 to about 1.8 cents per ton-mile in 2000. Corn production fluctuated between 1997 and 1999 (the latest year for which data are available), increasing from 9.2 billion bushels in 1997 to 9.8 billion bushels in 1998 before falling back to 9.4 billion bushels in 1999. However, the domestic use of corn (the primary use of corn) increased by about 4 percent—from 7.3 billion bushels in 1997 to 7.6 billion bushels in 1999. This increase suggests, all else being equal (including rail costs), greater demand for transportation and possibly higher rail rates. Yet rail rates for corn are influenced by a number of factors. Significant amounts of corn are produced in areas accessible to navigable waterways and, therefore, the transportation of corn is less dependent on rail. (About 25 percent of corn was shipped by rail in 1998, the latest year for which data are available.) In addition, rates may be affected by the supply of corn.
From 1997 through 1999 (the latest year for which data are available), the total supply of corn increased from 10.1 billion bushels to 11.2 billion bushels. It is possible that intermodal competition, increased domestic use of corn, and an increasing supply of corn may all have influenced rail rates for corn. The rates for chemicals (as illustrated by rates for potassium/sodium and plastics) decreased slightly and steadily from 1997 through 2000. According to data from the American Chemistry Council, the production of chemicals in the potassium/sodium classification increased between 1997 and 1999. Plastics production also steadily increased over the period. These data suggest that, all things being equal, rail rates should have increased over the period because of a higher demand for rail transportation. However, over 65 percent of chemicals are transported less than 250 miles, a distance that is truck-competitive, which may indicate that railroad rates are sensitive to truck competition. In addition, not all chemicals that are produced require immediate transportation. An official with the American Chemistry Council told us that chemical manufacturers often produce a product, load it onto railcars, and store the railcars until the product is sold, at which point it is transported to its destination. Although the tonnage of chemicals shipped by rail generally increased between 1997 and 2000, railroads accounted for only 20 percent of the tonnage transported in 2000. This is up slightly from the 19 percent transported in 1997. Rates for motor vehicles and parts also generally decreased over the 4-year period, but not at a steady rate. This occurred during a time when U.S. car and truck production generally fluctuated between 12 million and 13 million units. Car production, in particular, generally decreased over the period from about 5.9 million units to about 5.5 million units, according to Crain Communications, Inc., the publisher of Automotive News. The automotive industry is heavily dependent on railroads, and the Association of American Railroads—a railroad trade group—estimates that railroads transport about 70 percent of finished motor vehicles. Automotive production declines, among other things, might have contributed to generally decreasing rail rates. Data on auto parts production were not available. In its own study, the board found that the average, inflation-adjusted rail rate had continued a multiyear decline in 1999 and that, since 1984, real rail rates had fallen 45 percent. It found that real rail rates had decreased for both eastern and western railroads. According to the board, the results of its study implied that, although railroads retain a degree of pricing power in some instances, nearly all productivity gains achieved by railroads since the 1980s (when railroad economic regulation was reduced) have been passed on to rail customers in the form of lower rates. The board estimated that rail shippers would have paid an additional $31.7 billion for rail service in 1999 if revenue per ton-mile had remained equal to its 1984 inflation-adjusted level. The board acknowledged, however, that even though real rail rates had decreased overall, individual rates might have increased and, further, that some rail customers might feel disadvantaged if their rates did not fall to the same extent as their competitors’ rates.
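The board's $31.7 billion figure is a counterfactual: 1999 traffic priced at the 1984 inflation-adjusted revenue per ton-mile, less what shippers actually paid. A rough sketch of that calculation follows; the input values are invented for illustration and are not the board's data, so the result differs from the board's estimate.

    # Hypothetical inputs: not the board's figures.
    ton_miles_1999 = 1.4e12    # ton-miles of freight moved in 1999
    rate_1999 = 0.024          # actual 1999 revenue per ton-mile (dollars)
    rate_1984_real = 0.044     # 1984 rate restated in 1999 dollars

    actual_revenue = ton_miles_1999 * rate_1999
    counterfactual_revenue = ton_miles_1999 * rate_1984_real

    # The difference is the additional amount shippers would have paid
    # had revenue per ton-mile stayed at its 1984 inflation-adjusted level.
    savings = counterfactual_revenue - actual_revenue
    print(f"${savings / 1e9:.1f} billion")  # $28.0 billion with these inputs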
Our analysis of rail rates for coal, grain (corn and wheat), chemicals (potassium, sodium, plastics, and resins), and motor vehicles and motor vehicle parts in selected high-volume transportation markets generally showed that rates continued to decrease from 1997 through 2000. However, this was not true in all markets. Rail rates may have been sensitive to competition, and rail rates were generally higher in areas considered to have less railroad-to-railroad competition. Real rail rates for coal, although fluctuating in some markets, generally decreased from 1997 through 2000. In virtually every market we analyzed—both in the East (Appalachia) and in the West (Powder River Basin)—rates decreased. For example, on a medium-distance route from Central Appalachia to Orlando, Florida, rates decreased from about 2.2 cents per ton-mile in 1997 to 1.7 cents per ton-mile in 2000. (See fig. 2.) The 2000 rate was also substantially less than the rate of 2.6 cents per ton-mile in 1990. Competition may have played a role in the decrease in coal rates that we examined. In the West, the two Class I railroads that served the Powder River Basin during the 1990–1996 period, the Burlington Northern and Santa Fe Railway and the Union Pacific, continued to serve the market from 1997 through 2000. In the East, three Class I railroads served Central Appalachia until mid-1999: Conrail, CSX Transportation, and Norfolk Southern. Following its acquisition by the latter two carriers, Conrail began being absorbed into CSX Transportation and Norfolk Southern in June 1999, and those two carriers have continued to serve the market. As part of this transaction, certain areas of Pennsylvania and West Virginia (part of the Appalachia Coal Supply Region) that had been served exclusively by Conrail, although conveyed to Norfolk Southern, are available to CSX on an equal-access basis for 25 years, subject to renewal. Finally, rail rates for coal can be influenced by coal production as well as existing supplies of coal. In general, coal production in the Appalachian area decreased from 1997 to 2000—from about 468 million tons to about 421 million tons. On the other hand, coal production in the Western region (which includes the Powder River Basin) increased between 1997 and 1999—from about 451 million tons to about 512 million tons—before falling back to 510 million tons in 2000. In its 2000 review, the Energy Information Administration noted that coal production in Wyoming (which dominates coal production in both the West and the United States) was driven higher by an increasing penetration of Powder River Basin coal into Eastern markets—an action creating competition for coal produced in the East. Board officials told us that in order for Powder River Basin coal to penetrate Eastern markets, railroads have had to offer very low transportation rates. In addition, they suggested that rail rates for Powder River Basin coal are lower than rail rates for Appalachian coal because of the ability of railroads to use larger (110-car unit) trains to pick up the coal and because of more favorable terrain (flatter and straighter routes) to transport the coal from the mines. Coal supply (as measured by year-end coal stocks) generally fluctuated over the 1997 through 2000 period—increasing from about 140 million tons in 1997 to 183 million tons in 1999, before falling back to 142 million tons in 2000. From 1997 through 2000, real rail rates for shipments of wheat and corn generally stayed the same or decreased for the markets that we reviewed.
For example, wheat shipments moving over medium-distance (501 miles to 1,000 miles) routes generally followed this pattern. (See fig. 3.) The exception was wheat shipped from the Oklahoma City, Oklahoma, economic area to the Houston, Texas, economic area. On this route, rail rates generally increased by 12 percent—from 1.9 cents per ton-mile in 1997 to 2.2 cents per ton-mile in 2000. The largest increase occurred between 1997 and 1998, when rates went from 1.9 to 2.1 cents per ton-mile. This increase came at about the same time as the service crisis in the Houston/Gulf Coast area that delayed the delivery of railcars and, in some cases, halted freight traffic. Although board officials did not think railroads used rail rates to allocate the supply of railcars during this time, such an action could have occurred for particular commodities on particular routes. The increases also came at the same time as wheat production in Oklahoma rose from about 170 million bushels in 1997 to just under 200 million bushels in 1998. This may be consistent with an increase in the handling of bulk grain by the Port of Houston Authority between 1997 and 1998, from about 388,000 tons to 1.2 million tons. These factors may also have contributed to a general increase in rail rates for these movements. Even with these increases, the rail rate in 2000 was still less than it was in 1990—about 2.2 cents per ton-mile in 2000, as compared with 2.5 cents per ton-mile in 1990. Rail rates for wheat from the northern plains locations of the Great Falls, Montana, and Grand Forks, North Dakota, economic areas on medium-distance routes generally decreased over the period. Wheat production and demand for rail transportation may have been influencing factors. Although the volume of export wheat was increasing over the 1997 to 2000 period, wheat production in various states fluctuated. For example, wheat production in Montana steadily declined between 1997 and 2000, from about 182 million bushels to about 154 million bushels. In contrast, wheat production in North Dakota (the second-highest wheat-producing state behind Kansas in 2000) fluctuated between about 240 million bushels and 315 million bushels, alternately increasing and decreasing beginning in 1997. Whether wheat is transported or not depends on many factors, including the price of wheat and the amount of carryover stocks from year to year. In 2001, the U.S. Department of Agriculture reported that grain car loadings on railroads had steadily decreased over the previous 5 years, with the exception of 1999. This was attributed, at least partially, to farmers holding on to grain because of large harvests, large carryover stocks, and low prices. Rate trends between 1997 and 2000 for the shipment of corn were similar to those for wheat. Again, rate trends for corn can be illustrated in the rail rates for medium-distance routes. (See fig. 4.) The rates for most of these routes generally either stayed about the same or decreased over the period. Similar patterns are seen in the other distance categories. However, some rail rates on short-distance routes increased between 1999 and 2000. This was particularly true for corn shipments within the Minneapolis, Minnesota, economic area, where rates went from about 3.5 cents per ton-mile in 1999 to about 4.2 cents per ton-mile in 2000. The specific reasons for this increase are not clear. Corn production in Minnesota generally decreased during this period, from about 990 million bushels in 1999 to about 957 million bushels in 2000.
However, in November 1999, the U.S. Department of Agriculture reported that, while corn production and exports were expected to decrease, the domestic use of corn was expected to remain strong, and that domestic use of corn was heavily dependent on rail and truck transportation. Other than livestock feed, domestic use of corn includes corn sweeteners (used in the soft drink industry) and ethanol (a fuel additive). Minnesota also has an active livestock industry, and the state ranked third highest in the country in the number of hogs and pigs produced and hogs marketed in 1999 (behind Iowa and North Carolina). Rail rates for wheat and corn shipments appear to be sensitive to both inter- and intramodal competition. As shown in figure 3, rates for wheat shipments from the Duluth, Minnesota, to the Chicago, Illinois, economic areas—a potential Great Lakes water-competitive route—continued to be between 0.72 cents and just under 2 cents per ton-mile lower in 2000 than rates on other medium-distance routes we examined that potentially had fewer transportation alternatives (for example, shipments from Great Falls). In addition, as shown in figure 4, rail rates for corn shipments from the Chicago and Champaign, Illinois, economic areas to the New Orleans, Louisiana, economic area—potentially barge-competitive routes—were substantially lower (up to 1.7 cents per ton-mile in 2000) than rates on other medium-distance corn routes we examined that potentially had fewer transportation choices (for example, shipments from the northern plains states). Sensitivity to intramodal (railroad-to-railroad) competition also continued to be evident. For example, rail rates from 1997 through 2000 for wheat shipments originating in the Wichita, Kansas, and Oklahoma City, Oklahoma, economic areas were about 1.4 cents per ton-mile lower than rail rates for wheat shipments from the Great Falls economic area to the Portland, Oregon, economic area over the same period. The central plains area is considered to have more railroad competition than the northern plains area. Shipment size can also influence railroad costs and, therefore, rates. Loading more cars at one time increases efficiency and reduces a railroad’s costs. From 1997 through 2000, the average shipment size for wheat continued to be higher in the central plains than in the northern plains. For example, the average shipment size for wheat from the Wichita economic area from 1997 through 2000 was about 88 railcars, as compared with about 43 railcars for wheat shipments from the Great Falls economic area. In both instances, the average shipment size increased in the 1997 through 2000 period as compared with the 1990 through 1996 period—by about 17 railcars for wheat shipments from the Wichita area (from about 71 railcars to about 88 railcars) and by about 5 railcars for wheat shipments from the Great Falls area (from about 38 railcars to about 43 railcars). As discussed above, rates in the central plains states were typically lower than those in the northern plains states for the routes we examined. Real rail rate changes for chemical and transportation equipment (motor vehicles and motor vehicle parts) shipments were mixed for the 1997 through 2000 period for the markets we reviewed—some rates fell while others stayed the same or increased. These trends can be seen in short-distance (500 miles or less) shipments of plastics. (See fig. 5.) Two of the more notable trends are shipments within the Beaumont, Texas, and Lake Charles, Louisiana, economic areas.
In the Beaumont economic area, real rail rates increased from 42.6 cents per ton-mile in 1997 to 55.8 cents in 1998 before falling to 29.1 cents in 2000. In the Lake Charles economic area, rail rates increased from 25.9 cents per ton-mile in 1996 to 29.7 cents per ton-mile in 1997 before falling (by about 78 percent) to 6.5 cents per ton-mile in 1998. After increasing again in 1999, the rates on this route decreased to 4.8 cents per ton-mile in 2000. Rates in the other markets generally stayed about the same or decreased. While it is not clear why these rates changed the way they did, the changes came at the time (1997–1998) of a severe service crisis in the Houston/Gulf Coast area. Board officials said that generally, in their view, it did not appear that railroads used rail rates to allocate resources during the service crisis; they suggested that the erratic nature of the year-by-year rate changes reported for certain of these intra-terminal movements (which, according to the board, tend to involve small shipment sizes) may have been related to the heterogeneous nature of this chemicals traffic and to the low sampling rates for smaller shipment sizes—1 in 40 waybills for movements of 1 to 2 car shipments, and 1 in 12 waybills for 3 to 15 car shipments—in the stratified Carload Waybill Sample. Real rail rates for shipments of finished motor vehicles and motor vehicle parts or accessories also showed a variety of trends. In 1999, we reported that one of the more dramatic changes in rates was for the transportation of finished motor vehicles from Ontario, Canada, to Chicago. (See fig. 6.) The rates on this route decreased about 40 percent between 1990 and 1996. Since that time, the rates on this route have largely stabilized at about 12 cents per ton-mile, with a slight increase between 1997 and 2000. In general, rail rates for the transportation of motor vehicle parts or accessories on both long- and medium-distance routes decreased. The notable exception is rates for the transportation of motor vehicle parts or accessories between the Detroit, Michigan, and Dallas, Texas, economic areas. On this route, the rates generally increased from about 9 cents per ton-mile in 1997 to about 22 cents per ton-mile in 2000—about a 139 percent increase. Most traffic in motor vehicles and motor vehicle parts or accessories is either under contract or exempt from economic regulation. Use of contracts suggests that rate decreases may be related to price discounts offered in return for guaranteed volumes of business. However, board officials noted that in recent years, railroads have increasingly been offering motor vehicle manufacturers service packages in which railroads provide premium service for higher rates. This may account for rate increases on specific routes. Between 1997 and 2000, the proportion of all railroad revenue that came from shipments transported at rates that generated revenues exceeding 180 percent of variable costs stayed relatively constant at just under 30 percent. (See fig. 7.) This result is about 2 percentage points less than the average for the 1990–1996 period. In addition to being a jurisdictional threshold for the board to review the reasonableness of rates, revenue-to-variable cost ratios are sometimes used as indicators of shippers’ captivity to railroads. If used in this way, the higher the R/VC ratio, the more likely it is that a shipper can use only rail to meet its transportation needs. Individual commodity results differed markedly.
In 2000, 62 percent of chemicals (which include potassium and sodium compounds and plastics) and 42 percent of coal were transported at rates generating revenues exceeding 180 percent of variable costs. However, only 17 percent of transportation equipment (which includes motor vehicles and motor vehicle parts or accessories) and 32 percent of farm products (which includes wheat and corn) were transported at rates above this level. Board officials suggested that the comparatively high and rising R/VC ratios for chemicals traffic are likely attributable in part to the fact that the railroads’ greater liability exposure associated with transporting hazardous materials is not reflected in the costs attributable to this traffic under the board’s rail costing system. Board officials told us that higher rail rates for transporting hazardous chemicals are reflected in higher revenues for a railroad. However, additional costs incurred because of the higher liability exposure (such as court judgments against a company and set-asides for future claims) are shown as special or extraordinary charges that do not become part of the variable costs of a movement in the board’s rail costing system. In contrast to the fairly constant overall proportion of goods shipped with revenues exceeding 180 percent of variable costs, the proportions for the four broad classes of commodities increased or decreased noticeably. For example, the proportion of farm products transported at above 180 percent R/VC increased from 23 percent to 32 percent from 1997 through 2000, following an increase from 1990 to 1994 (from 22 to 32 percent) and a decline from 1994 to 1996 (from 32 to 23 percent). The proportion of coal shipped above this ratio decreased from 50 percent to 42 percent from 1997 through 2000, continuing a gradual overall decrease from 1990. In some instances, the average R/VC ratios for the 1997–2000 period were considerably higher or lower than the average R/VC ratios for the 1990–1996 period. For example, the largest increase in average R/VC ratios for the routes that we reviewed was for medium-distance shipments of plastics from the Houston, Texas, economic area to the Little Rock, Arkansas, economic area. On this route, the average R/VC ratio increased by about 64 percentage points—from an average of 154 percent (1990–1996) to an average of 218 percent (1997–2000). The R/VC ratio on this route peaked at 250 percent in 1999. The R/VC ratio on this route was generally increasing while the rail rate was generally decreasing, suggesting that both rates and variable costs were decreasing and that railroads did not pass on all cost reductions to customers in the form of rate reductions. In contrast, the largest decrease in average R/VC ratios for the routes we examined was about 116 percentage points, which occurred for motor vehicle shipments between the Chicago economic area and the Dallas economic area—from an average of 240 percent (1990–1996) to an average of 124 percent (1997–2000). Over this latter period, rail rates on this route decreased from about 8.7 cents per ton-mile in 1997 to about 8 cents per ton-mile in 2000. This suggests that variable costs increased during this period. The R/VC ratios we observed are consistent with railroads’ ability to use differential pricing, and they are sensitive to competition.
For example, over the 1997–2000 period and the 1990–1996 period, the R/VC ratio for medium-distance shipments of wheat from the Great Falls economic area (a northern plains location) exceeded the ratios for wheat shipments from the Wichita, Oklahoma City, and Duluth economic areas for the specified destinations. (See fig. 8.) There are fewer potentially competitive alternatives to rail in the northern plains states. In contrast, shipments originating in the central plains states (for example, from Wichita and Oklahoma City) are considered by some to have more alternatives to rail than those in the northern plains. Duluth (a northern plains origin) offers a competitive alternative of transportation by water. The anomaly appears to be medium-distance wheat shipments originating in the Grand Forks, North Dakota, economic area (a northern plains origin) transported to the St. Louis, Missouri, economic area. The R/VC ratio for this route, although consistently above the R/VC ratio for shipments from the Duluth economic area (with potential water competition), was generally below those of Wichita and Oklahoma City (with potentially more rail competition) from 1997 through 2000. This suggests that wheat shipments on this route may have been sensitive to barge competition from the Mississippi River or rail competition in the central plains states or the Midwest. The use of R/VC ratios has limitations. In particular, the ratios are subject to misinterpretation because they are simple divisions of revenues by variable costs. It is possible for rates paid by shippers to be dropping while the R/VC ratio is increasing—a seemingly contradictory result. For example, if revenues (which are the rates paid by shippers) are $2 and variable costs are $1, then the R/VC ratio is 200 percent. If costs decrease by 50 cents and railroads pass this cost decrease on to shippers by decreasing rates by 50 cents, the R/VC ratio becomes 300 percent (revenues of $1.50 divided by variable costs of 50 cents). Therefore, by itself, the R/VC ratio could suggest that railroads are using their market power to make shippers worse off when this might not be the case. Board officials suggested that the R/VC ratio shown in figure 8 for the movement of wheat from the Great Falls economic area to the Portland economic area is one such instance of this. In this case, rail rates from Great Falls generally decreased over the 1997 through 2000 period from about 3.5 cents per ton-mile in 1997 to about 3.2 cents per ton-mile in 2000. Board officials said unit costs were also decreasing, in part, because of increases in shipment size and various carrier-specific productivity improvements related to the 1995 Burlington Northern Railroad merger with the Atchison Topeka & Santa Fe Railway Company. The R/VC ratio on this route increased from 240 percent in 1997 to 308 percent in 2000. Similarly, using the example above, if variable costs increase by 50 cents (from $1 to $1.50) and railroads increase their rates by the same amount (from $2 to $2.50), then the R/VC ratio becomes about 167 percent ($2.50 divided by $1.50). Again, the R/VC ratio alone would suggest that shippers are better off—because the R/VC ratio decreased from 200 percent—when this might not necessarily be the case. Although R/VC ratios have limitations, they can be useful indicators of railroad pricing and of whether railroads may be using their market power to set rates. As described previously, the R/VC ratio is a jurisdictional threshold for the Surface Transportation Board to consider rate relief cases.
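The arithmetic behind these illustrations can be set down directly. The following is a minimal sketch in Python that reproduces the hypothetical $2-revenue, $1-cost examples above; the function name and values are illustrative only and are not drawn from actual waybill data.

    def rvc_ratio(revenue, variable_cost):
        """Revenue-to-variable-cost (R/VC) ratio, expressed as a percentage."""
        return 100.0 * revenue / variable_cost

    # Baseline: $2 in revenue against $1 in variable cost.
    print(rvc_ratio(2.00, 1.00))  # 200.0 percent

    # Costs fall by 50 cents and the full savings are passed to the shipper:
    # the rate paid drops, yet the ratio rises.
    print(rvc_ratio(1.50, 0.50))  # 300.0 percent

    # Costs rise by 50 cents and the rate rises by the same amount:
    # the shipper pays more, yet the ratio falls.
    print(rvc_ratio(2.50, 1.50))  # about 166.7 percent

As the sketch shows, the ratio moves with the margin between revenues and variable costs, not with the absolute level of rates, which is why it cannot be read in isolation as a measure of shipper welfare.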
The board uses other analytical techniques to determine whether rates are reasonable. We provided a draft of this report to the Surface Transportation Board and the Department of Transportation for their review and comment. The board provided its comments in a meeting that included its general counsel and chief economist. In general, the board agreed with the material presented in our draft report and stated that it accurately portrayed rail rate trends over the period of our study. It said that the overall trend of declining rates that we found is consistent with studies and analyses prepared by the board. Board officials said that, while it can be difficult to identify with specificity the reasons why rail rates might change in the short run, especially rates for specific commodities over specific routes, the draft report did an admirable job in discussing factors that could influence rate changes. Among the specific comments made were (1) that low rail rates have allowed Western coal to penetrate Eastern coal markets and (2) that R/VC ratios for chemicals may not fully reflect the costs of increased liability exposure faced by railroads in transporting hazardous chemicals. We made changes to the report to reflect the board’s comments. The board offered additional clarifying, presentational, and technical comments that, with few exceptions, we incorporated into our report. The Department of Transportation, in oral comments made by the director, Office of Intermodal Planning and Economics, Federal Railroad Administration, said that the report fairly and accurately portrayed the changes in railroad freight rates over the study period, and that rail rates were responsive to market conditions and competition. The department suggested that our Results in Brief section should indicate that R/VC ratios cannot be relied upon as measures of railroad market power. We modified the Results in Brief to provide a fuller discussion of R/VC limitations. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days after the date of this letter. At that time, we will send copies of this report to congressional committees with responsibilities for freight railroad competition issues; the administrator, Federal Railroad Administration; the chairman, Surface Transportation Board; and the director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact either James Ratzenberger at ratzenbergerj@gao.gov or me at heckerj@gao.gov. Alternatively, we may be reached at (202) 512-2834. Key contributors to this report were Stephen Brown, Richard Jorgenson, and James Ratzenberger. As we did for our 1999 report, we used the board’s Carload Waybill Sample to identify railroad rates from 1997 through 2000 (the latest data available at the time of our review), which we then analyzed to determine rate changes. The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates for specific commodities in specific markets by shipment size and length of haul.
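To make the rate measure used throughout this report concrete, the following is a minimal sketch in Python of how a rate in cents per ton-mile can be computed from basic shipment data. The field names and the sample shipment are hypothetical; the actual Carload Waybill Sample has its own record layout, carries sampling weights, and requires conversion to constant dollars before rates are compared across years.

    def rate_cents_per_ton_mile(revenue_dollars, tons, miles):
        """Freight rate in cents per ton-mile: revenue divided by ton-miles."""
        return 100.0 * revenue_dollars / (tons * miles)

    # A hypothetical 100-car grain shipment: 10,000 tons moving 750 miles
    # for $165,000 in freight revenue.
    print(rate_cents_per_ton_mile(165_000, 10_000, 750))  # 2.2 cents per ton-mile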
According to board officials, revenues derived from the Carload Waybill Sample are not adjusted for such things as year-end rebates and refunds that may be provided by railroads to shippers who exceed certain volume commitments. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the board disguises the revenues associated with these movements before making this information available to the public. Consistent with our statutory authority to obtain agency records, we obtained a version of the Carload Waybill Sample that did not disguise revenues associated with railroad movements made under contract. Therefore, the rate analysis in this report presents a truer picture of rail rate trends than analyses that may be based solely on publicly available information. Since much of the information contained in the Carload Waybill Sample is confidential, rail rates and other data in this report that were derived from this database have been aggregated at a level sufficient to protect this confidentiality. As in our 1999 report, we analyzed coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, or rubber), and transportation equipment (finished motor vehicles and motor vehicle parts or accessories) shipments. These commodities represented about 52 percent of total industry revenue in 2000 and, in some cases, had a significant portion of their rail traffic transported on routes where the ratio of revenue to variable costs equaled or exceeded 180 percent. We used rate indexes and average rates on selected corridors to measure rate changes over time. A rate index attempts to measure price changes over time by holding constant the underlying collection of items that are consumed (in the context of this report, items shipped). This approach differs from comparing average rates in each year because, over time, higher- or lower-priced items can constitute different shares of the items consumed. Comparing average rates can conflate changes in prices with changes in the composition of the goods consumed. In the context of railroad transportation, rail rates and revenues per ton-mile are influenced, among other things, by average length of haul. Therefore, comparing average rates over time can be influenced by changes in the mix of long- and short-haul traffic. Our rate indexes attempted to control for the distance factor by defining the underlying traffic collection to be commodity flows occurring in 2000 between pairs of census regions. (A simplified illustration of this fixed-weight approach appears below.) To examine the rate trends on specific traffic corridors, we first chose a level of geographic aggregation for corridor endpoints. For grain, chemical, and transportation equipment traffic, we defined endpoints to be regional economic areas defined by the Department of Commerce’s Bureau of Economic Analysis. For coal traffic, we used economic areas to define destinations and used coal supply regions—developed by the Bureau of Mines and used by the Department of Energy—to define origins. An economic area is a collection of counties in and about a metropolitan area (or other center of economic activity); there are 172 economic areas in the United States, and each of the 3,141 counties in the country is contained in an economic area.
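The sketch below in Python illustrates the fixed-weight approach described above: each corridor’s traffic volume in the fixed collection year is held constant as a weight, so the resulting index reflects rate changes rather than shifts in the mix of long- and short-haul traffic. The corridor names, rates, and volumes are hypothetical and do not come from the Carload Waybill Sample.

    # Hypothetical ton-mile volumes for the fixed traffic collection (year 2000).
    weights = {"corridor_a": 4_000_000, "corridor_b": 1_000_000}

    # Hypothetical rates, in cents per ton-mile, observed in each year.
    rates = {
        1997: {"corridor_a": 2.0, "corridor_b": 4.0},
        2000: {"corridor_a": 1.8, "corridor_b": 4.2},
    }

    def rate_index(year, base_year=1997):
        """Fixed-weight index of rates in `year` relative to `base_year` (= 100)."""
        def weighted_cost(yr):
            return sum(rates[yr][c] * w for c, w in weights.items())
        return 100.0 * weighted_cost(year) / weighted_cost(base_year)

    print(round(rate_index(2000), 1))  # 95.0: rates fell about 5 percent overall

Because the weights are fixed, the index registers only the rate changes on each corridor; a shift of traffic toward cheaper long-haul corridors, which would pull a simple average rate down, leaves the index unchanged.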
As in our 1999 report, we placed each corridor in one of three distance-related categories: 0–500 miles, 501–1,000 miles, and more than 1,000 miles. Although these distance categories are somewhat arbitrary, they represent reasonable proxies for short-, medium-, and long-distance shipments by rail. To address issues related to revenue-to-variable cost ratios, we obtained data from the board identifying revenues, variable costs, and R/VC ratios for commodities shipped by rail at the two-digit Standard Transportation Commodity Code level. We used data from the Carload Waybill Sample to identify the specific revenues and variable costs and to compute R/VC ratios for the commodities and markets we examined. Using this information, we then identified those commodities and markets whose R/VC ratios were consistently above or below the 180 percent R/VC level. We performed our work from December 2001 through May 2002, in accordance with generally accepted government auditing standards. The following are real (inflation-adjusted) rail rates for coal shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles. The following are real (inflation-adjusted) rail rates for wheat and corn shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles. The following are real (inflation-adjusted) rail rates for selected chemical and transportation equipment shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles.
The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 gave freight railroads increased freedom to price their services according to market conditions. A number of shippers are concerned that freight railroads have used these pricing freedoms to unreasonably exercise their market power in setting rates for shippers with fewer alternatives to rail transportation. This report updates the rate information in GAO's 1999 report (RCED-99-93) for selected commodities and markets, including some with and some without effective competitive transportation alternatives. From 1997 through 2000, rail rates generally decreased, both nationwide and for many of the specific commodities and markets that GAO examined. However, rail rates for some commodities and distance categories--such as wheat moving long distances and coal moving short distances--have stayed about the same or increased. In other instances, such as wheat moving medium distances, rail rates stayed about the same or decreased. Overall, the proportion of rail shipments above the Surface Transportation Board's statutory jurisdictional threshold for considering rate relief actions--where railroad revenues for the shipment exceed 180 percent of variable costs--stayed relatively constant at just under 30 percent from 1997 through 2000. However, the proportion of shipments for which revenues exceeded 180 percent of variable costs varied, depending on the commodity and market.
There are no specific government-wide training standards or training requirements for the federal grant workforce; however, the administration has taken some steps toward developing such standards. Between 2004 and 2011, a work group within the administration’s Grants Policy Committee (GPC) studied the issue of training and certifying the grant workforce, which led to a number of recommendations. However, the work group’s recommendations were never formally adopted by the GPC. In 2011, OMB formed the Council on Financial Assistance Reform (COFAR), an interagency group of executive branch officials, to replace the GPC and another federal board—the Grants Executive Board—with the stated aim of creating a more streamlined and accountable structure to coordinate financial assistance, including grants. In November 2012, just over a year after it was created, COFAR identified the need to develop a qualified and professional workforce as one of five priorities to guide its work on grants management reform in the coming years. To accomplish this, COFAR set five goals: establish core competencies for grants managers by September 2013; develop a baseline body of knowledge as a shared resource by September 2013; establish a government-wide resource repository for federal grants professionals by September 2013; provide training for core competencies by September 2014; and establish certification standards by September 2015. In developing proposals for grants training, COFAR considered the process and requirements that are in place for training the federal acquisitions workforce. Federal agencies generally do not need explicit statutory authority to make particular acquisitions of goods or services. However, for the procurement of goods and services, federal executive agencies are governed by a set of requirements specified in the Federal Acquisition Regulation. Congress has recognized the need for a professional acquisitions workforce by requiring executive agencies to establish management policies and procedures for the effective management of this workforce—including education, training, and career development. Several governmental organizations play critical roles in assisting agencies to build and train their acquisitions workforce. These organizations include OMB’s Office of Federal Procurement Policy, which provides government-wide guidance on managing the acquisitions workforce, and the Federal Acquisition Institute, which promotes the development of the civilian acquisitions workforce. In contrast to acquisitions, federal agencies do not have inherent authority to enter into grant agreements without affirmative authorizing legislation. Consequently, each federal grant program has legislation that identifies the types of activities that it can fund and the purposes it is to accomplish. Frequently, the authorizing legislation that establishes a particular grant program will define the program objectives and leave the administering agency to fill in the details through regulation. In addition to specific statutory requirements, OMB issues administrative requirements in the form of circulars. These circulars provide government-wide guidance that sets standards for a range of grants management activities, including grant application forms, cost principles, audits, and financial reporting. However, training is not one of the issues for which specific government-wide guidance is provided.
While many differences exist among grant programs, a basic distinction can be made between two broad grants management roles. The first role is grants management specialist—employees who generally possess detailed knowledge of OMB grants administration requirements, specific agency policies and procedures for grants management, and relevant laws and regulations to help ensure accountability of the grant funds. Grants management specialists are responsible for performing many of the daily grants management tasks from the pre-award phase through closeout. The second role is program specialist—employees who possess expert knowledge in the specific area necessary to meet a grant’s goals. Reflecting the wide variety of federal programs that grants support, individuals performing this role typically possess a specialized program or subject-matter expertise. These specialists can be involved in several stages of the grant process, including announcing the terms and conditions of a grant, recommending potential grantees, and monitoring grantees’ progress in achieving the grant’s goals. Although these categories describe separate focus areas and many employees involved with grants concentrate on one or the other, the categories are not mutually exclusive. A single employee can carry out the functions typically associated with both a grants management specialist and a program specialist. Despite the existence of a “grants management specialist” job series, identifying the federal grant workforce presents challenges due to differences in how agencies manage grants and the range of employees involved in grants management. We have previously reported that organizations need to determine the workforce skills necessary to achieve their strategic goals as part of an effective training and development program. However, identifying these skills, as well as the roles, responsibilities, and training needs of individuals in the federal grant workforce at a government-wide level remains a challenge. This is due to the different ways agencies assign grants management responsibilities to grants management specialists and program specialists and the wide range of job series the agencies use. Prior to 2010, no specific job classification existed for the many federal employees responsible for carrying out managerial and administrative tasks related to grants, including ensuring compliance with OMB and agency policies and procedures. In the absence of a specific job classification, officials at selected agencies told us that they classified these employees under a variety of other job series that did not focus on grants, such as general, administrative, and subject-matter job titles. OPM recognized the need for a classification that would more accurately capture the work of federal employees who manage grants, and in November 2010 the agency established a “grants management specialist” job series (series 1109). Since then, the extent to which agencies have adopted the 1109 job series has varied greatly. As of December 2012, there were 1,414 federal employees in the 1109 series in the 22 grant-making agencies we included in our survey. More than half of these employees worked at HHS. In contrast, 5 of these agencies had no series 1109 employees, and an additional 7 had fewer than 20 (see figure 1). The way that an agency manages grants may explain whether, and to what extent, it has adopted the grants management specialist job series 1109.
In the four agencies we selected for further review, we saw differences in the extent to which they adopted the 1109 series, partly because of different grants management approaches. At HHS, where grants management specialists work alongside program specialists, officials told us that they embraced the 1109 job series. Shortly after OPM introduced the series, HHS began moving hundreds of employees, who were classified in several general or miscellaneous administrative series and whose primary work responsibilities focused on carrying out managerial and administrative tasks, to the new 1109 series. In contrast, both Education and DOT—where program specialists primarily manage grants—largely chose not to use the new series. Officials at these agencies told us that this decision resulted from their grants management approach, in which program specialists, who have expert subject-matter knowledge in the grant program area, also carry out the functions of grants management specialists. These officials also had concerns about transitioning program specialists such as engineers or educational specialists, who have specific educational requirements, to the 1109 series, which has no such requirements. In addition to the 1109 grants management specialist series, the federal grant workforce includes a wide range of employees in other job series, many of whom function as program specialists. These employees generally possess specialized education and expertise in an area directly relevant to the grant’s subject area. For example, at DOT, the employees responsible for managing federal airport safety and improvement grants are typically civil engineers (series 810), who are familiar with airport runway construction standards. Similarly, HHS employs social scientists (series 101) with expertise in child and family services as well as doctors and nurses (series 602 and 610, respectively) to manage and support grants that fund health care and related services to needy families. Some agencies, like DOT, typically rely upon program specialists to carry out both the grants management and program specialist functions, while others, such as HHS, employ individuals whose primary responsibility is to administer daily grants management tasks while working alongside program specialists. With such a diverse range of job series across federal grant-making agencies and given the considerable variation in how agencies assign individuals to grants management roles, it is difficult to identify all the staff responsible for managing federal grants other than on an agency-by-agency basis. We asked the four agencies we focused on in this review—Education, HHS, State, and DOT—to identify the number of employees they considered to be in their grant workforce and provide us with breakdowns by job series. They identified over 5,100 employees, spanning more than 50 different occupational job series. These include grants management specialists, program specialists (e.g., public health program specialists, education program specialists, civil engineers, transportation specialists, and Foreign Service officers), auditors, and financial managers, among others. See figure 2 for a visual representation of the relative frequency of more than 90 percent of these occupational job series. For a complete listing of these job series as well as the total number of employees in each, as identified by Education, HHS, State, and DOT, see appendix III.
With the exception of the grants management specialists in the 1109 series, grant officials at each of these four agencies told us that only a subset of those in the 57 job series they identified as part of the grant workforce are actually assigned to work on grants. The program specialists, accountants, contracting officers, and other individuals identified as part of the grant workforce generally comprised a relatively small subset of the total number of agency employees in those associated job series. For instance, while HHS employed over 6,800 doctors as medical officers (series 602) as of December 2012, officials identified only 29 of these as part of the grant workforce. Moreover, for some of the employees identified as being in the grant workforce, working on grants-related activities may only be part of what they do. On a government-wide level, OMB staff told us that they understand that the 1109 series does not encompass the entire federal grant workforce. In a town hall meeting held in December 2012 with grant-making agencies to discuss COFAR priorities, COFAR officials recognized that government-wide efforts to develop training and certification standards will need to consider how to reach members of the grant workforce who are not in the 1109 series. To date, COFAR has not made public specific plans on how it will define the grant workforce, nor which members of the grant workforce—such as grants management specialists and program specialists—it intends to include when developing government-wide training standards. We previously reported on the value of identifying competencies that are critical to successfully achieving agency missions and goals, such as those related to grant making, and assessing gaps in those competencies among the workforce. Competencies describe measurable patterns of knowledge, skills, abilities, behaviors, and other characteristics that an individual needs to perform work roles or occupational functions successfully. Put another way, competencies specify what a person needs in order to do the job successfully. While there are other ways to identify training needs, such as through individual development plans or needs assessments, grants management competency models can be used to establish an overall framework to guide agencies’ training efforts as well as to provide a basis for agencies to identify critical skills gaps in the workforce and training to address these gaps. There have been previous efforts to develop government-wide grants management competencies, and another is planned by COFAR for 2013. These efforts date back to 2004, when the Chief Financial Officers Council’s now-defunct GPC formed a work group on training and certifying the grant workforce which, according to officials in this group, developed an initial draft set of competencies. However, after holding discussions with OPM, the work group decided not to pursue this effort because OPM chose to take on the task itself. Subsequently, in 2009, OPM released a government-wide grants management competency model. According to OPM officials who were involved in the effort, the model describes functional competencies that apply across multiple job series or occupations, similar to OPM’s crosscutting leadership competencies. In addition to numerous general competencies, the model identified five technical competencies specific to grants management (see table 1). See appendix IV for the full set of OPM’s grants management competencies.
OPM’s model has served as a framework to support a government-wide effort to assess critical skills gaps among the federal workforce. Through that effort, the administration identified seven mission critical competencies, and three were related to grants management. OPM officials told us that they are consulting with OMB staff and COFAR officials to determine a strategy for assessing skills gaps in these competencies. Following the release of OPM’s grants management competencies in 2009, not all grant-making agencies have embraced them. In response to our survey, half of agency CLOs (11 of 22) reported using the OPM competencies for purposes of identifying or developing grants training. The agency experiences described below may provide some insight into why some agencies have not used OPM’s competencies and also raise questions about their sufficiency in meeting the needs of program specialists. At HHS, several grant-making operating divisions told us that while they found OPM’s competency model useful for grants management specialists, they also needed a separate competency model for staff who function as program specialists working with their grants management specialists to manage grants. For example, officials in HHS’s Health Resources and Services Administration (HRSA) noted that OPM’s competencies were not appropriate for program specialists, so they developed a set of program specialist competencies. While these incorporate elements similar to OPM’s technical competencies, they also include others that are more directly related to the employee’s specific role in the HRSA grant program. HRSA highlights the role and authority of program specialists in the agency as distinct from that of the grants management specialists (series 1109 employees). In addition to competencies specifically related to grants management, HRSA also includes competencies related to policy analysis, research and evaluation, and technical assistance. For example, under the “policy analysis” competency, program specialists must be able to apply public health knowledge to HRSA’s programs, perform policy analysis, and use this information to develop recommendations for improving HRSA programs or grantee performance. Under the “technical assistance” competency, program specialists need to be able to plan and conduct training and other activities that help grant applicants and grantees understand and comply with grant objectives and policies. In June 2013, the Centers for Disease Control and Prevention (CDC) finalized a set of competencies for program specialists which, like HRSA’s program specialist competencies, includes competencies directly related to the employee’s specific role in CDC grant programs. At Education, an official told us that OPM’s single overarching grants management competency model did not fit the agency’s culture or the way it managed grants. Unlike HHS, Education employs program specialists to manage grants through the entire process. Of the six job series under which the grant workforce is classified, five are mission critical occupations, and the agency incorporates grants management competencies into the competency models for these occupations. For example, competencies for management and program analysts (series 343) include elements similar to the five OPM technical competencies.
Education also includes competencies demonstrating leadership in the field and programmatic understanding and knowledge, as well as technical assistance, for both management and program analysts and education research analysts/education program specialists (series 1730/1720). An Education official explained that the agency’s culture was resistant to a single, one-size-fits-all approach, and that staff were concerned that such a model would not adequately account for the technical and programmatic expertise needed to achieve program goals. Education used these grants management competencies as part of an agency-wide competency gap assessment, and is using the results to develop training to close these gaps. A recent effort to develop government-wide grants management competencies provides further confirmation of the importance of developing competencies based on how agencies manage grants. In June 2012, senior executives from across the federal government gathered at the first meeting of the group Leading Executives Driving Government Excellence (Leading EDGE) to examine a variety of government-wide management issues, and one issue the group selected to address was grants training. In November of that year, the group issued a report recommending a set of grants management competencies. The Leading EDGE report recognized that federal agencies target and structure their grants training depending on how they manage grants and that this varied across agencies. Further, the report acknowledged that some agencies assign grants management specialists to work in conjunction with program specialists while others assign an individual to perform both of these roles, and that grants training competencies needed to account for this variation. Despite its recognition of the importance of considering the different ways that agencies manage grants when formulating a grants management competency model, the competency model the Leading EDGE report proposes is not significantly different from the model developed by OPM in 2009. In fact, the Leading EDGE model consists of the same competencies, defined in the same way, as those in OPM’s model. The only change to the competencies is how Leading EDGE categorizes two of them—“planning and evaluation” and “project management.” In the OPM model these are listed as “general” competencies, whereas under the Leading EDGE proposal, they become “technical” competencies. The Leading EDGE report does not specifically explain how making such a change would provide a meaningful alternative to the status quo that led agencies like HRSA and Education to explore other approaches. This is particularly true in light of the fact that, according to OPM officials, it is not uncommon for agencies to reclassify competencies as appropriate for their needs under existing competency models. So, while the Leading EDGE report represents a useful step forward in its recognition of the need to more fully integrate considerations related to program specialists when thinking about grants management competencies, its proposal, much like the current OPM model itself, is not likely to sufficiently address the need for competencies that are more responsive to the program specialist role. We believe that these agency experiences, and that of Leading EDGE, provide important information for COFAR’s consideration when developing government-wide grants management competencies. According to planning documents, COFAR expects to produce government-wide grants management competencies by September 2013.
Toward this end, OMB staff told us that they have held three town hall meetings with agency officials between August 2012 and April 2013 in order to review COFAR’s priorities, such as developing grants management competencies, and to obtain feedback from all federal grant-making agencies on what COFAR should consider as it moves forward on its priorities. They also engaged other stakeholders, such as professional grants associations, at meetings and conferences and on conference calls. In addition, OMB staff said that COFAR plans to build on Leading EDGE’s work as it develops new grants management competencies for the federal workforce. We have previously reported that agencies need to provide their workforce with training and development that is aligned with their missions, goals, and cultures. We found that agencies provide formal and informal grants training through classroom instruction, online webinars, and meetings or briefings. The source of this training was generally a combination of direct agency provision and external vendors. According to our agency CLO survey, 7 of 22 agency CLOs responded that all or most of their grants training was provided by their agency. In addition, 9 of the 22 agency CLOs said they receive all or most of their grants training from commercial or non-profit sources, and the most common commercial provider by far was a single vendor. This vendor offers a set of core grants management courses designed to provide general knowledge that can be applied across the federal grants environment. Among these is an introductory course on grants as well as courses on cost principles, administrative requirements, and grants monitoring. Agencies also reported difficulties finding grants training that met all of the needs of their grant workforce. Specifically, 13 of 22 agency CLOs responding to our survey said that they found the availability of training that met agency needs was at least a moderate challenge in training the grant workforce. For example, one agency said it was challenging to find courses that include instruction on the specific statutes that authorize grant programs. In addition, in our interviews at selected agencies, officials told us that while external training courses were often useful in providing staff with general grants management knowledge, it was challenging to find courses that addressed agency- or program-specific policies and procedures. Of the four agencies we selected for a more detailed review, some responded to this challenge by customizing or supplementing foundational grants training courses provided by a commercial vendor or by developing training mechanisms, other than formal courses, for distributing agency-specific guidance to employees. For example, Education worked with a commercial vendor to tailor the vendor’s grants training courses to reflect Education’s policies and procedures and include examples and exercises specific to the agency. Although these courses were taught by the vendor’s instructors, Education provided subject-matter experts to guide the customization of the courses to include agency policies and procedures. According to Education officials, the decision to tailor these external courses was based on participant feedback as well as their understanding of how adults process information while learning. The agency offered the courses to one cohort of employees in fiscal year 2011, to another cohort in fiscal year 2012, and to two cohorts in fiscal year 2013. Each cohort consisted of 35 employees.
In contrast, at HHS, officials chose to supplement commercially available grants training courses by developing their own. For example, in addition to offering program specialists a core of vendor-provided courses, during our review HRSA began requiring these employees to take three courses that the operating division specifically developed for its own grant policies and procedures. HRSA officials told us that while the commercial courses offered beneficial foundational knowledge on grants management, the courses did not fully address how the grants process operated at HRSA. For example, although the commercial courses cover administrative requirements for grants management as set by OMB, a goal of the HRSA program specialist training is to help program specialists understand how HHS policy relates to OMB guidance. The program specialist course manual uses practice scenarios with specific HRSA examples to present the reasoning behind when and which policies should supersede others. HRSA officials told us that the program specialist training was developed, in part, with the goal of improving customer service to grantees by ensuring that staff receive specific training on how to manage HRSA grants. Each of the agencies that we selected for a detailed review also used other approaches to impart agency-specific guidance on grant policies and procedures to the grant workforce. For example, State used a number of training mechanisms in addition to formal courses to train its grant workforce both domestically and overseas. These included informal training sessions and a variety of online information sharing. Informal training included roundtable discussions facilitated quarterly by State’s central grant policy office. These discussions allowed employees to share knowledge and perspectives with their peers on topics such as monitoring grant programs through site visits. State has several methods for sharing information, including an online platform where employees can find the latest information on policy guidance and regulations, grant reports, and grant systems, as well as online repositories for frequently asked questions on agency grants policy and financial management of grants. State officials told us that because OMB administrative guidance generally does not apply to the overseas grant environment, the agency has developed training for the grant workforce on agency-specific policies and procedures. Using the training mechanisms described above, among other training methods, agency officials said they were able to provide employees with agency-specific training as issues arose and, through the use of online information sharing, easily reach those in the overseas bureaus. On a government-wide level, COFAR plans to provide training on grants management core competencies by September 2014. Toward that end, OMB staff told us that COFAR has solicited feedback in town hall meetings with grant-making agencies, explored potential options for training venues, and is having HHS take the lead for COFAR in this effort. Certification programs are designed to ensure that individuals attain the knowledge and skills required to perform in a particular occupation or role by establishing consistent standards. Requirements for certification often include some amount of professional experience, continuing education, and successful completion of an assessment process that measures an individual’s mastery of knowledge and skills against a set of standards.
Internal control standards for agencies state that management should identify the appropriate knowledge and skills, provide training, and establish a mechanism to help ensure that all employees receive appropriate training as part of establishing a control environment. One internal control mechanism agencies have used is to develop certification programs to help ensure that staff demonstrate a level of competence in a particular field critical to the agencies’ missions and goals. Although not required government-wide, many agencies require or recommend grants management certifications in order to help ensure sound management practices across the agencies’ grant workforce. Twenty-three percent (5 of 22) of agency CLOs responding to our survey reported that their agency requires some type of grants management certification, and 73 percent (16 of 22) either require or recommend certification. The 16 agencies that either require or recommend grants management certification most often chose the following as one of the top three reasons why they do so: to ensure compliance with grants management regulations (15 of 16 agencies); to ensure accountability of grant funds (12 of 16 agencies); and to ensure uniform knowledge (11 of 16 agencies). The experiences of agencies that have established certification programs suggest some broader lessons for establishing grants management certification standards government-wide that apply to the entire grant workforce. Two of the four agencies we selected for a more detailed review—State and HHS—have developed certification programs. State has implemented certification programs for all grants officers and, as of January 2013, for all grants officer representatives. To earn the grants officer certification, employees must complete grants training, possess specific levels of education or experience, and comply with a continuing education requirement. To earn the grants officer representative certification, employees must complete specific grants training and comply with a continuing education requirement. HHS developed an agency-wide certification program for grants management specialists, but only some of its operating divisions have implemented it. The certification program has four levels for grants management specialists, each of which requires staff either to complete specific training or to obtain an increasing amount of on-the-job experience. Several operating divisions also require that program specialists take some grants training, and one operating division is proposing a separate required certification program for program specialists. One lesson that agency experiences suggest is the importance of clearly defining the roles and responsibilities of a diverse grant workforce and then appropriately tailoring certification standards in order to ensure that all key personnel in the grant workforce have appropriate and consistent knowledge to fulfill their roles. For example, State identified two key roles in its grants management process—grants officers and grants officer representatives—and tailored separate certifications for these roles. Certified grants officers (e.g., grants management specialists) issue federal grant awards up to a specified value depending on the level of certification. As the value of grant awards increases, the amount of training required for the certification also increases. All levels of grants officer certification require completion of a certain number of training hours every 3 years in order to renew the certification.
Grants officer representatives are program specialists who oversee the programmatic and technical aspects of the grant. They are appointed by grants officers and must complete two grants management courses before obtaining the grants officer representative certification. All grants officer representatives must update their training with additional grants training every 3 years. In 2007, we reported that State had inconsistent training and skills requirements for the grant workforce, and that State officials were concerned about the agency’s ability to effectively manage its grants and other foreign assistance. Agency officials told us that the grants officer representative certification was implemented in January 2013, to address inconsistencies in training for grants officer representatives across the agency. In another example, HHS’s guidance on its agency-wide certification program recognizes the important role that program specialists play in the technical, scientific, and programmatic aspects of grants. The guidance states that since program specialists work in partnership with grants management specialists, they should complete an orientation course in grants administration that emphasizes the program specialist’s role in the grants management process. Completion of this orientation course in conjunction with joint nomination by the supervising program and grant officials results in certification as a program specialist. The guidance encourages program specialists to take further grants training, but does not require them to complete the grants management certification. Another lesson illustrated by agency experiences in this area is the importance of ongoing leadership commitment and having a mechanism in place to track the implementation of training requirements. HHS developed an agency-wide professional certification program for the grant workforce in 1995, but the program was not fully implemented even though operating divisions were required to do so by guidance and an agency policy directive. We found that a limited number of grant-making operating divisions (4 of the 13) had taken action to implement the required certification program. Grant officials in HHS’s operating divisions told us that the program was inconsistently implemented because of changes in agency-level leadership and lack of enforcement. Other officials said that without enforcement of the certification requirement or updated agency-wide training guidance for grants management it was a challenge to obtain the resources needed and identify courses for training the grant workforce. Another potential factor contributing to uneven implementation was that HHS did not track most staff completion of the grants management certifications agency-wide, although operating divisions tracked staff completion where the requirement was enforced. Agency officials told us that they did track certification completion for those meeting the requirements of the highest certification level reserved for chief grants management officers, a position typically filled by members of the Senior Executive Service. According to these officials, the operating divisions at HHS nominate staff to receive the chief grants management officer certification and the agency-level grants policy division authorizes it. In contrast, State tracked whether staff completed requirements for its grants certifications in a database maintained by the Office of the Procurement Executive. 
Officials told us that the visibility the system provides is a useful monitoring tool that encourages staff to complete the certification program in a timely manner. We believe that the lessons from these two agencies that have developed certification programs provide important information for COFAR’s consideration. On a government-wide level, COFAR has identified the need to ensure a consistent knowledge base and professionalize the grant workforce by developing certification standards by September 2015. OMB staff told us that they are looking into a multi-agency effort, led by the Departments of Commerce and Energy, to develop a grants management certification. OMB staff also said they have had initial discussions on certification standards with professional grants associations. Given the importance of grants as a tool to achieve federal objectives and the large outlays the federal government makes to fund them each year, it is critical that the people who manage these grants—the federal grant workforce—be well trained to handle their responsibilities. A key first step in this process is to have a clear understanding of the different ways agencies manage their grant processes and the wide variety of employees involved. A robust understanding of the diverse types of employees that make up the grant workforce and the key roles they carry out provides a foundation for the development of an effective government-wide grants training strategy. In November 2012, COFAR identified the federal government’s need to develop a qualified and professional grant workforce as one of its top priorities in the coming years. Toward that end, it announced a series of goals including the creation of government-wide grants competencies, training, and certification standards. As COFAR moves forward with plans to develop such standards, its efforts would benefit from considering the experiences of federal agencies in areas such as the development of grants management competencies and certification programs. Several agencies we reviewed found competencies to be a useful tool for structuring and identifying training needs. However, to fully support the agencies’ grant-making missions and goals, the competencies need to address two key roles in the grant workforce: grants management specialists and program specialists. Although OPM’s competency model appears to largely address the former, the experiences of agencies we reviewed suggest that the model was not directly applicable for program specialists involved in the grants process. Recognizing this gap, agencies took a variety of approaches, including developing their own competency models for program specialists or finding ways to incorporate specific grants management competencies into existing competency models. Based on these agency experiences, future government-wide competencies that do not address the program specialist role will likely not be fully adopted by grant-making agencies. Similar to agency experiences with competencies, some agencies have developed certification programs with requirements that differ depending on whether the individual serves as a grants management specialist or a program specialist. One agency, for example, developed a certification program that specifies a standard training curriculum and other requirements for both key roles in the grant workforce, which helped to ensure that key members of its grant workforce have appropriate levels of proficiency.
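The structural point in the preceding discussion, a shared core of grants competencies plus role-specific additions for grants management specialists and program specialists, can be sketched as data. In the sketch below, every competency name is an illustrative placeholder (the report does not reproduce OPM’s model or any agency’s model); the point is only that a model built around one role leaves the other role’s competencies uncovered.

    # A minimal sketch of a two-role competency model. Every competency
    # name is a hypothetical placeholder, not OPM's or any agency's model.
    CORE_COMPETENCIES = {
        "grants regulations and OMB circulars",
        "cost principles",
        "award monitoring",
    }

    ROLE_COMPETENCIES = {
        "grants_management_specialist": CORE_COMPETENCIES | {
            "award administration",
            "closeout and audit resolution",
        },
        "program_specialist": CORE_COMPETENCIES | {
            "programmatic and technical review",
            "performance assessment in the program subject area",
        },
    }

    def coverage_gap(proposed_model, role):
        """Return the competencies a role needs that a proposed model omits."""
        return ROLE_COMPETENCIES[role] - proposed_model

    # A model written only around the grants management specialist role...
    proposed = ROLE_COMPETENCIES["grants_management_specialist"]
    # ...leaves the program specialist's role-specific competencies uncovered.
    print(coverage_gap(proposed, "program_specialist"))

Either remedy described in this report, a separate program specialist model or role-specific additions to a single shared model, amounts to ensuring that the coverage gap is empty for both roles.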
Future government-wide certification standards that do not distinguish between the grants management specialist role and the program specialist role will likely not be fully adopted for the entire grant workforce by all grant-making agencies. As COFAR crafts government-wide standards for grants training, agency experiences highlight the importance of defining the roles of the grant workforce. As COFAR works to professionalize the federal grant workforce, we recommend that the Director of OMB, in collaboration with COFAR, take the following two actions: 1. Include the program specialist role as COFAR develops a government-wide grants management competency model. This could be done by developing a separate model for program specialists or revising the existing grants management model so that it incorporates additional competencies for program specialists. 2. Distinguish between the grants management specialist role and the program specialist role as COFAR establishes government-wide certification standards for the federal grant workforce. We provided a draft of this report to the Secretaries of the Departments of Education, Health and Human Services, State, and Transportation; and to the Directors of the Office of Management and Budget and the Office of Personnel Management. OMB staff provided oral comments and concurred with our findings and recommendations. OMB staff, Education, HHS, and DOT also provided technical comments on the draft that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of the Departments of Education, Health and Human Services, State, and Transportation, and the Directors of the Office of Management and Budget and the Office of Personnel Management. In addition, the report will be available on our web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or by email at czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To better understand how the federal workforce responsible for managing grants is trained, and what challenges and good practices exist, we (1) described the federal grant workforce at selected agencies and analyzed the challenge of identifying the workforce government-wide; and (2) examined selected good practices agencies used and challenges, if any, they faced in training the grant workforce, as well as potential implications of these practices and challenges for government-wide efforts to develop grants training standards. To obtain a broad view of how the grant workforce is trained government-wide, we reviewed relevant federal laws and administrative guidance and conducted a survey of chief learning officers (CLO) at the 24 Chief Financial Officers (CFO) Act agencies. The survey questionnaire included questions on grants-specific training, certification, administration and development of training, and challenges and good practices in training the grant workforce.
Two agencies, the General Services Administration and the Office of Personnel Management (OPM), were dropped from our survey after officials in those agencies stated that they did not have any personnel actively working on grants. Each of the remaining 22 agencies responded to the survey, resulting in a 100 percent response rate from CFO Act agencies with an active grant workforce. The questionnaires were sent to the CLOs, who were asked to share them, as needed, with other knowledgeable agency staff in order to provide an informed response. Agencies were asked to return a single questionnaire that best represented their practices and challenges as a whole. Four agencies returned multiple questionnaires from sub-agency components. To address this, we developed a methodology for combining responses in order to formulate an agency-wide response. We also conducted follow-up conversations with agencies to clarify responses or collect additional details. To obtain illustrative examples of experiences, challenges, and good practices in training the grant workforce, we conducted additional reviews at four agencies. We selected these agencies from among the 22 CFO Act agencies with an active grant workforce after considering the agencies’ total grant obligations, number of grant programs, and number of discrete grants awarded (using fiscal year 2011 USASpending.gov data). In addition, in making our selections we considered whether an agency had grants training or certification requirements, any good grants training practices as identified by its CLO, the existence of significant issues regarding grants or training as reported in our prior work, the number of grant-making operating divisions, the variety of grant types (formula, block, and project), major grant issue areas (such as health, education, and transportation), and functional areas (such as service delivery, construction, and research). The scope of this review was limited to federal grants and did not include cooperative agreements or other types of federal assistance. We selected agencies that administered a large majority of federal grant obligations and also collectively represented a range of positions on these other criteria. The four agencies we selected—the Departments of Education (Education), Health and Human Services (HHS), State (State), and Transportation (DOT)—account for over 84 percent of the federal government’s total grant dollars obligated in fiscal year 2012, manage all three major grant types, and cover a broad range of grant issue areas. At each of these agencies, we reviewed documents including training materials and grants policy directives and interviewed training officials. We interviewed agency-level grants and training officials at each agency, and at the agencies where grants training is generally managed at the sub-agency level—HHS, Education, and DOT—we also interviewed officials at selected sub-agencies. These sub-agency components were selected using criteria similar to those detailed above for agency selection. We also collected information on training courses, their costs, and the grant workforce at both the agency level and at all grant-making sub-agency components. Since the focus of our work was to better understand and provide information on how the federal government trains its grant workforce, we conducted our reviews at each selected agency to identify examples of grants training experiences, challenges, and good practices, and not to audit their grants training programs.
Accordingly, we did not seek to make an evaluative judgment of the specific grants training programs at the four selected agencies nor to assess the prevalence of the practices or challenges we cite either within or across the four agencies. Agencies and sub-agency components other than those cited for a particular practice may or may not also be engaged in the same practices. While the responses and comments provided on our survey reflect the views of the CLOs at 22 of the CFO Act agencies, the examples of good training practices and challenges we identified are from the four selected agencies and are therefore not generalizable to other agencies. To examine good practices that agencies use in training the grant workforce, we used criteria from our work on strategic training and workforce planning in the federal government, internal control standards for training, and OPM guidance. To describe the federal grant workforce and the challenge of identifying the workforce government-wide, we collected workforce data from the four case example agencies, reviewed data on the grants management specialist job series 1109, and interviewed officials at OPM. We requested data from our four case example agencies on the makeup of their grant workforce. To identify their grant workforce, cognizant grant officials at the agency or sub-agency component level generally made judgments regarding which employees to include and then used agency personnel databases to access job series data associated with those identified. To assess the reliability of these data, we examined the workforce data provided by the agencies for anomalies. This included obtaining data from FedScope on the total number of federal employees in the 1109 grants management specialist series for each of the four selected agencies. We then compared the numbers of 1109 series employees reported by the agencies to the data in FedScope in order to confirm the data collected from the agencies for the 1109 series. Only minor differences in the total number of series 1109 employees were noted, and these differences were expected due to the 4-month difference in the time frames covered by the data. We also interviewed officials at OPM to discuss the development of the 1109 job series for grants management specialists and challenges associated with grant workforce identification. Based on our tests and discussions with agency officials, we determined that the data were sufficiently reliable for the purposes of this report. To obtain broad views on how the grant workforce is trained government-wide, we interviewed officials at the Office of Management and Budget and OPM. To understand the efforts of the Council on Financial Assistance Reform, we reviewed documents produced by the Council and interviewed staff at OMB. We conducted this performance audit from May 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Education (Education) is the third largest federal grant-making agency in total value of funding, administering approximately $44 billion in grants in fiscal year 2012. The agency administers formula, block, and project grants through eight program offices (see table 2).
These grants support the nation’s schools and students through programs such as Title I, Impact Aid, and Special Education. Grants training is managed at both the agency and program-office levels. Program offices within Education have the option of setting training requirements for their grant workforce, and grants training is required in three of the agency’s eight grant-making program offices. Training requirements generally consist of several courses in grants management. On an agency-wide level, only one course is required; this course is intended for discretionary grant license holders. In addition to grants training courses, other related training opportunities offered by Education include informal brown bag sessions, mentoring, and on-the-job training. As of December 2012, Education officials estimated their total grant workforce to be approximately 761 employees, out of a total of more than 4,200 employees. The agency has no employees classified in the grants management specialist series because it uses program specialists to manage the entire grant process and because of the essential job requirements of the six job series in which the program specialists are currently classified (see appendix III for a complete breakdown of the Education grant workforce by job series). According to an Education official and agency documentation, the agency incorporates grants management competencies in the competency models for at least five of the six job series that make up the grant workforce. For example, technical competencies for management and program analysts (series 343) include proficiency in formula and discretionary grants and grant monitoring, as well as demonstrating programmatic understanding and knowledge. Competencies are also used to identify training needs through a competency assessment for mission critical occupations; five of these occupations were positions that Education identified as being part of the grant workforce. For example, Education’s fiscal year 2011 assessment determined that proficiency gaps existed in fiscal monitoring; policy, legislation, and administrative rulemaking; and audit-related competencies. The agency developed a strategy to close those gaps through formal classroom training as well as web-based training and informal seminars on the topics identified. A grants training official told us that several of these courses are available and more are being developed to be offered in fiscal year 2013. In addition to the agency-wide competency assessment, one program office, the Office of Elementary and Secondary Education, conducted its own learning needs analysis through a contractor in 2012. This analysis assessed the learning needs for two positions, one of which was a grants specialist. The assessment was conducted through a survey of non-supervisory staff within the program office, who were asked to indicate the importance of tasks or content knowledge to their day-to-day work and the type of professional development desired in the next year. As a result of this learning needs assessment, a program office official reported that data analysis courses were procured to support professional development among staff members. This official also told us that while the Office of Elementary and Secondary Education plans to contract for a learning needs assessment every 3 years, it has also developed a tool for managers and staff to use annually between external assessments to help grants specialists prepare their individual development plans.
The agency requires one grants training course agency-wide; this annual one-hour briefing is intended for those employees holding a discretionary grant license, which allows them to obligate and award grant funds. According to an Education official, the annual briefing is designed to provide license holders an update on legislative and regulatory developments that affect the agency’s grants administration policies. The briefing also offers license holders an opportunity to share best practices, common concerns, and recommendations on how to improve the grants process. In another agency-wide effort, according to agency officials, Education worked with a commercial vendor in 2010 to develop and customize a curriculum of eight grants management courses that lead to a grants management training certificate. Topics covered by the courses include grants monitoring, cost principles, administrative requirements, and accountability. These courses are offered to cohorts of 35 employees, and each of the agency’s program offices receives a certain number of slots in each cohort. Given criteria from the central learning and development office, it is the responsibility of the program offices to determine which of their employees are substantially involved in grants administration and management and would most benefit from the grants management curriculum. Education officials told us that by offering the eight courses to cohorts of employees, the agency saved over $64,000 per cohort in fiscal year 2013, or 31 percent of the cost of providing the courses to 35 employees individually. As reported by officials in each grant-making program office, 3 of 8 program offices at Education require grants training and 7 of 8 offer grants training courses or other training opportunities to the grant workforce. For example, the Office of Innovation and Improvement offers an onboarding program for new staff that includes courses on grants management and grants monitoring. In addition to these courses, this office holds internal brown bag sessions and a speaker series to address grant issues. According to agency officials, Education does not require a grants management certification. Although it offers training that leads to a grants management certificate, it does not offer a grants management certification program for its grant workforce. Officials in Education’s program offices, as well as officials in the centralized offices that offer training, told us that in evaluating Education’s grants training courses offered in fiscal years 2011 and 2012, they collected information on participant satisfaction for a majority of the courses offered. Many officials did not respond or did not know if they collected information for other types of evaluation. However, those that did respond reported that they collected information on knowledge gained for more than half of the courses offered and did not collect information on the impact on individual behavior or return on investment for any courses offered. Education officials said they plan to begin collecting information in the summer of 2013 on the impact of the commercial vendor-provided courses that make up their grants management training certificate on individual and organizational performance. To assess this training, Education expects to distribute a survey to training participants as well as their supervisors.
Based on a prototype we obtained for review, the survey will ask about the extent to which training has been applied on the job as well as the courses’ impacts on grants management performance and on achieving organizational grants management goals. In addition to measuring the impact of individual courses, Education officials told us that they measure the impact of training on performance through multi-year analysis of the results of the biennial mission critical occupations competency assessment. The assessment measures the current and desired proficiency of employees in particular competencies and allows Education to target proficiency gaps for closure. According to these officials, assessment of proficiency gaps helps them to understand the developmental needs of the workforce and then provide training that targets these needs. By examining changes in workforce proficiencies over multiple years, Education hopes to assess the effectiveness of its training curricula. HHS is the largest federal grant-making agency in terms of total federal grant funds, administering approximately $333 billion in grants during fiscal year 2012. Formula, block, and project grants are administered across 13 operating divisions within HHS (see table 3). These grants support medical services and research including Medicaid, Temporary Assistance for Needy Families, Head Start, health centers, medical research, and preventive health services. Grants training is managed by the operating divisions, with general training standards and guidance offered through the Office of Grants and Acquisition Policy and Accountability’s Division of Grants. Training in grants management is required for at least part of the grant workforce in 11 of 13 operating divisions, and these requirements range from a set of courses to professional development activities toward earning a certification in grants management. HHS officials in each of the 13 operating divisions that administer grants, as well as officials in the agency’s Division of Grants, estimated the total grant workforce to be approximately 2,166 employees, out of a total workforce of over 86,500. All HHS grant-making operating divisions have at least some employees classified in the grants management occupational series (1109). These grants management specialists work with program specialists to manage grants. HHS does not have agency-wide competencies for the grant workforce; however, several operating divisions at HHS reported having grants management competencies to support their training efforts. These competency models vary across the operating divisions. One operating division, the Centers for Medicare & Medicaid Services (CMS), reported applying OPM’s grants management competency model directly, while several others had a subset of competencies similar to those included in OPM’s model. Three operating divisions, the Health Resources and Services Administration (HRSA), the Centers for Disease Control and Prevention (CDC), and the Office of the Assistant Secretary for Preparedness and Response (ASPR), identified separate competency models for program specialists. At HHS, officials in several operating divisions told us that they use both competencies and individual development plans or assessments to determine training needs. For example, HRSA conducted a training needs assessment that highlighted areas for additional training in grants management.
Based on this assessment and the identification of program specialists as critical to the grants management process, HRSA developed a training program for program specialists that will support a certification requirement that officials expect to implement in fiscal year 2013. HRSA officials told us that they are working toward developing a similar program for grants management specialists. According to agency officials, HHS does not require any agency-wide grants training; however, all operating divisions at HHS reported offering grants training to their grant workforce, with 85 percent (11 of 13) of HHS operating divisions requiring some amount of training. Training courses offered include general grants management training courses that cover topics such as OMB circulars, cost principles, and monitoring. In addition, the National Institutes of Health (NIH) and HRSA have developed customized training specific to their operating divisions and grants. According to officials and documentation, NIH requires a month-long orientation to grants management, and HRSA developed a three-course introduction to grants management. Several operating divisions offer informal training courses, webinars, or other sessions to address current topics or specific issues. For example, the Administration for Children and Families (ACF) reported offering several informal courses, including one on child welfare/child support and another on Temporary Assistance for Needy Families fiscal management. While HHS developed policy and guidance that required agency-wide certification of its grant workforce in 1995, only four of its operating divisions have taken steps to implement it. Two of the 13 HHS operating divisions—NIH and the Indian Health Service—require grants management specialists to be certified in grants management, while ACF offers its certification program as an option to grants management specialists. Officials in the Food and Drug Administration have developed a proposal for a similar certification program and told us that they plan to implement it in fiscal year 2014. Each of these certification programs is based upon the agency-wide Grants Management Professional Development program established by HHS in 1995. This program was not widely enforced due to changes in leadership, but the requirement still resides in the agency’s Grants Policy Directives and Awarding Agency Grants Administration Manual. In addition to these four certification programs for grants management specialists, HRSA is developing a required certification program for program specialists that is to be fully implemented in fiscal year 2013. Once the program specialist certification is in place, HRSA officials told us that they plan to develop a similar certification for grants management specialists. Finally, five operating divisions require their grants management specialists to complete a set of grants management courses from an external vendor, although they do not have the certification program in place. These same courses are an integral part of the certification programs that the four operating divisions described above have implemented or proposed. While the agency-wide certification program has not been fully implemented across all operating divisions, HHS officials told us that they have an agency-wide certification process in place for those meeting the requirements of the highest certification level—an operating division’s chief grants management officer.
In addition, in the years after the certification program was developed, officials told us that the agency provided grants training courses both internally through HHS University and through an external vendor to make grants training available to the operating divisions. HHS operating divisions, as well as the agency’s Division of Grants, reported that to evaluate their grants training courses offered in fiscal years 2011 and 2012, they collected information on participant satisfaction for a majority of the courses offered, collected information on knowledge gained in over half of the courses offered, and did not collect information on the impact on individual behavior or return on investment for any courses offered. However, officials explained that while they understand that information gathered on participant satisfaction for courses provided by commercial vendors is available from the vendors for review, these results are typically not requested by the agency. The Department of State (State) administered over $511 million in project grants in fiscal year 2012. These grants fund a variety of assistance programs worldwide, including those supporting Democracy, Human Rights and Labor; Weapons Removal and Abatement; Professional, Cultural, and Educational Exchange Programs; Public Diplomacy Programs around the world, including in Iraq, Afghanistan, and Pakistan; Combating Human Trafficking; and Refugee Assistance, among others (see table 4). At State, training of the grant workforce is managed at the agency level through the Office of the Procurement Executive (A/OPE). A/OPE sets training requirements for the grant workforce and collaborates with the Foreign Service Institute to help ensure that training courses are appropriately developed and delivered. Additional training on financial management of grants is developed by the Office of Federal Assistance Financial Management. Training and certification are required for all grants officers and grants officer representatives. While training for grants is centrally managed, each grant may be managed by a single bureau or by multiple bureaus, depending on the type of assistance being provided. For instance, for grants that support a particular country or region, such as Iraq Assistance Programs, multiple bureaus may manage the funds to support a range of assistance programs. On the other hand, if the grant supports assistance in a particular program area, such as educational exchanges, a single bureau, the Bureau of Educational and Cultural Affairs, may manage most or all of the funds. At State, agency policy states that grants are managed by warranted grants officers who are authorized to award grants up to a specified amount. Grants officers appoint grants officer representatives (GOR) to assist with technical and programmatic aspects of the grant. These two positions correspond to the grants management specialist and program specialist roles, respectively, as described in our report. In addition, financial management officers assist with fiscal management aspects of the grants process. State estimated that there are approximately 605 grants officers, most of whom are stationed overseas. In addition, 280 positions, including financial management officers, support those grants officers at overseas posts and in bureaus domestically. The GOR certification program became effective in January 2013, and GORs had up to 6 months to meet the certification requirements; thus, State did not yet have a complete accounting of all certified GORs at the time of publication.
A full accounting is expected by late summer 2013, when all GORs are required to be certified and registered in State’s database that tracks both certified grants officers and GORs. The grant workforce consists primarily of Foreign Service officers serving in posts around the world. State officials explained that, at these posts, not all Foreign Service officers work on grants, but those who do are responsible for multiple tasks, including managing grants. As a result, many personnel work on grants, and for many of the 740 Foreign Service officers identified as part of the grant workforce, grant-related tasks are a part-time responsibility. State officials told us that, during the time of this review, they did not use grants management competencies for the purpose of identifying training. Beyond the required training and certification for grants officers and GORs, these officials explained that they assess additional training needs through management reviews. For instance, a review of grants administration at the Bureau of Educational and Cultural Affairs determined that the grant workforce lacked expertise in several areas. To address the specific training needs of the bureau’s employees, A/OPE developed training for the bureau that augmented the formal training courses. State requires all grants officers and GORs to complete training in grants management. Course topics include an introduction to grants and cooperative agreements, an introduction to monitoring grants, and ethics. As State officials explained, some of the required training courses were developed within the agency because the general training available from commercial vendors did not meet the needs of their staff. For example, most of State’s grant recipients are foreign organizations, and the training from commercial vendors does not include specific grants management scenarios that staff are likely to encounter with foreign grant recipients. State officials told us that because OMB administrative guidance does not apply to the overseas grant environment, the agency trains the grant workforce on agency-specific policies and procedures. After designing its own courses, State was also able to offer the courses through a distance-learning system that allowed grants officers and GORs overseas to access the training. In addition to formal training courses, State offers informal training to its grant workforce through a variety of mechanisms. These include informal training sessions and a variety of online information-sharing tools. For example, officials told us that, at quarterly meetings facilitated by State’s central grants policy office, roundtable discussions allow employees to share knowledge and perspectives with their peers on topics such as monitoring grant programs through site visits. Officials told us that State has several methods for sharing information, including an online platform where employees can find the latest information on policy guidance and regulations, grant reports, and grant systems, as well as online repositories for frequently asked questions on agency grants policy and financial management of grants. As described in the agency’s grants policy directives, State has implemented two certification programs—one for grants officers and the other for GORs. Both certifications require completion of specific grants training and have continuing education requirements. The grants officer certification also requires certain levels of education or experience.
Certified grants officers are warranted to issue federal grant awards up to a specified value depending on the level of certification. As the value of grant awards increases, the amount of training required for the certification also increases. All levels of grants officer certification require completion of a certain number of training hours every 3 years to renew the certification. GORs must complete two grants management courses before obtaining the grants officer representative certification. After certification, the GOR is eligible to be designated to a specific agreement by a grants officer. In addition, all GORs must update their training with additional grants training every 3 years. The grants officer representative certification was implemented in January 2013 to help address inconsistencies in the level of training for grants officer representatives across the agency. Staff completion of both grants certification requirements is tracked in a database maintained by the Office of the Procurement Executive. State officials reported that to evaluate grants training courses offered in fiscal years 2011 and 2012, they collected information on participant satisfaction for all four of the grants management courses offered, collected information on knowledge gained in one of these courses, and did not collect information on the impact on individual behavior or return on investment for any courses offered. Officials indicated that the feedback they received from participants on commercial vendor courses is what prompted them to develop customized courses that reflect State grants management practices and scenarios. The Department of Transportation (DOT) is the second largest federal grant-making agency in total value of federal grant funds, administering approximately $52 billion in grants in fiscal year 2012. Formula and project grants are administered across 10 operating administrations within DOT (see table 5). These grants support the nation’s highways, mass transit systems, railroads, and airports through programs such as Highway Planning and Construction and the Airport Improvement Program. While the Financial Assistance Policy and Oversight Office issues agency-wide communications on general training requirements and works with OMB on government-wide training efforts, grants training is managed by the 10 grant-making operating administrations. The operating administrations have the option of setting grants training requirements, and requirements exist in 7 of the 10 operating administrations. These requirements range from a single course to a set of courses that lead to a grants management certificate. In addition to grants training courses required or offered, other training opportunities offered by operating administrations to address grant issues include online webinars, internal brown bags, working groups, and on-the-job training. DOT officials in each of the 10 grant-making operating administrations estimated the total grant workforce to be approximately 1,313 employees, out of a total agency workforce of over 57,000 employees. The agency has only one employee classified in the 1109 grants management series because it employs program specialists to manage the entire grants process. However, in the course of our review, one operating administration, the Federal Motor Carrier Safety Administration (FMCSA), established a grants management office and hired employees in the 1109 grants management series to support this office.
Program specialists at DOT are classified in more than 30 different job series (see appendix III for a complete breakdown of the DOT grant workforce by job series). While DOT does not have agency-wide competencies for the grant workforce, operating administrations use other methods to identify training needs. For example, officials at the Office of Airports in the Federal Aviation Administration (FAA) told us that they use the FAA Human Resource Policy Manual and a standardized job analysis tool to help supervisors identify and prioritize appropriate training for employees’ individual development plans. In broader efforts to assess training needs, a Federal Transit Administration official told us that the administration has contracted with a private company to conduct a comprehensive training needs assessment and develop a 3-year plan to address the training needs. Officials at one operating administration, the National Highway Traffic Safety Administration (NHTSA), told us that they have draft grants management competencies scheduled for finalization by the end of fiscal year 2013. According to NHTSA officials, the competencies will be used to conduct periodic assessments, identify training needs, and develop individual development plans. According to agency officials, DOT does not require any agency-wide grants training; however, all of the grant-making operating administrations at DOT offer grants training to their grant workforce, and 70 percent (7 of 10) of the operating administrations require some amount of grants training. Training courses offered, according to operating administration officials, include general grants management training, such as courses on OMB circulars, cost principles, and monitoring, as well as customized training specific to the operating administrations’ grant programs, including FMCSA’s 3-day grants management manual training, the Federal Highway Administration’s (FHWA) online training courses on planning and research grants, and the Office of Airports’ 3-day course on airport financial assistance, provided by the FAA Academy. Both external vendors and the agency provide grants training at the operating administrations. Agency-provided grants training is developed by subject-matter experts within the operating administrations or by agency-affiliated training institutes. For example, for FHWA, the National Highway Institute has developed a series of web-based and classroom courses on planning and research grants. Topics covered in this series include common grant rules, grant administration, cost principles, and audits. DOT employs program specialists to manage the entire grant process, and operating administrations provide training on both programmatic and technical aspects of grants as well as on general grants management. For example, officials at FAA’s Office of Airports told us that their grant workforce needed highly specific technical and professional knowledge, skills, and abilities related to airport planning, engineering, construction, and operational and environmental matters. The Office of Airports used courses from the FAA Academy to provide training that hones the technical knowledge of its grant workforce. DOT operating administrations use methods other than formal courses to train the grant workforce. For example, an FMCSA official said that the administration has conducted webinars on specific grant issues such as audits and indirect costs. These webinars are recorded and posted to the operating administration’s website for others to view.
In another example, an FHWA official told us that in addition to formal classroom instruction, the administration encourages other training methods such as on-the-job training, working one-on-one with a team leader, and self-paced learning through instructional resources, such as the DOT Financial Assistance Guidance Manual, on agency websites. According to agency officials, DOT does not have an agency-wide requirement for grants management certification of the grant workforce. Also, although some operating administrations offer training that leads to a grants management certificate, none of the operating administrations requires certification for its grant workforce. DOT operating administrations reported that to evaluate their grants training courses offered in fiscal years 2011 and 2012, they collected information on participant satisfaction for a majority of the courses offered, collected information on knowledge gained in less than half of the courses offered, collected information on the impact on individual behavior for seven grants training courses offered by NHTSA, and did not collect information on return on investment for any courses offered. In order to assess the impact of training on individual behavior, officials at NHTSA told us that for grants training courses provided by the Transportation Safety Institute, course participants receive an evaluation form 90 days after they take the course. This evaluation asks, among other questions, what the participants have learned from the courses that has affected their performance at work and whether the courses have better equipped them to plan, develop, and implement highway safety programs. The information collected through the evaluations is not systematically analyzed beyond a basic review, although the comments are considered when updating course curricula, according to NHTSA officials. The following competency model was released by OPM in September 2009. According to OPM officials, the model was developed using their standard approach, known as the Multipurpose Occupational Systems Analysis Inventory–Closed Ended method. This included several procedures, such as announcing the intent to study competencies for the grant workforce, collecting information from agencies, performing environmental scans, speaking with subject-matter experts to refine the competencies and job tasks, and surveying the workforce on which competencies are appropriate for different grades of the grant workforce. In addition, the officials told us that they worked with OMB’s Grants Policy Committee to help ensure that the survey employed a common language that the grant workforce across the federal government would understand. In addition to the contact named above, Peter Del Toro, Assistant Director; Erin Saunders Rath; and Shelby Kain made key contributions to all aspects of the report. Vida Awumey, Thomas Beall, Penny Berrier, Jill Center, Gus J. Crosetto, Beryl H. Davis, R. Eli DeVan, Debra A. Draper, Alexandra Edwards, Robert Gebhart, George Guttman, Melissa King, Belva M. Martin, Kimberly McGatlin, Rebecca Rose, Michelle B. Rosenberg, and Stuart Kauffman also provided assistance. In addition, Amy Bowser provided legal support and Donna Miller developed the report’s graphics.
Grants are a key tool used by the federal government to achieve a wide variety of national objectives. However, there are no government-wide training standards or requirements for the federal grant workforce. The Council on Financial Assistance Reform (COFAR) has reported that it plans to develop such standards. GAO was asked to describe how the grant workforce is trained and what challenges and good practices exist. This report (1) describes the federal grant workforce at selected agencies and analyzes the challenge of identifying the workforce government-wide and (2) examines selected good practices agencies use and challenges, if any, in grants training and the potential implications for developing government-wide grants training standards. GAO obtained government-wide information on grants training through a questionnaire to chief learning officers at 22 federal agencies. For in-depth illustrative examples of grants training practices and challenges, GAO selected four agencies—Education, HHS, State, and DOT—based on factors such as total grant obligations and the number and type of grant programs administered. GAO also reviewed documentation and interviewed officials at the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM). Identifying the federal grant workforce presents challenges due to differences in how agencies manage grants and the wide range of job series that make up the grant workforce. Some agencies manage grants by using a combination of program specialists (subject-matter experts) and grants management specialists, while other agencies use program specialists to manage the entire grant process. In the four agencies that GAO focused on for this review—the Departments of Education (Education), Health and Human Services (HHS), State (State), and Transportation (DOT)—agency officials identified over 5,100 employees who were significantly involved in managing grants, spanning more than 50 different occupational job series. Recognizing the need for a classification that would more accurately capture the work of federal employees who manage grants, in 2010 OPM created the "Grants Management Specialist" job series. However, due to the different ways that agencies manage grants, the extent to which agencies have adopted this series varies widely. More than half of the 22 federal grant-making agencies GAO surveyed make limited or no use of the job series. COFAR, established by OMB in October 2011 to provide recommendations on grants policy and management reforms, has announced plans to develop government-wide grants training standards, but it has not released information on how it plans to define the grant workforce. Defining the grant workforce is an important step in developing an effective government-wide grants training strategy. Agency officials identified three key practices to develop the grant workforce: (1) competencies, (2) agency-specific training, and (3) certification programs. First, some agencies developed their own competency models in order to better reflect the way they assigned grants management responsibilities. Officials at these agencies told GAO that OPM’s grants management competency model was not directly applicable to employees carrying out the program specialist role in their organizations. For example, rather than apply OPM’s competency model, a component of HHS developed a separate competency model tailored to program specialist employees responsible for managing grants.
Second, agencies addressed their grants training needs through courses and other training mechanisms designed to provide knowledge of agency-specific policies and procedures. Officials reported challenges finding grants training that met all the needs of the grant workforce and responded by customizing grants training courses. For example, Education customized commercial courses to include agency-specific policies and procedures, and a component of HHS developed its own grants management courses to achieve the same goal. Third, to ensure a minimum level of proficiency in grants management, some agencies established grants management certification programs and tailored the certifications to fit the different roles within the grant workforce. For example, State tailored separate certification programs after recognizing two distinct roles played by its employees who manage grants. These agencies’ experiences have implications for COFAR’s plans to develop government-wide training standards, including creating grants management competencies, delivering training for those competencies, and establishing certification standards. GAO is making recommendations to the Director of OMB regarding the importance of including both types of grants management roles—grants management specialists and program specialists—when developing government-wide grants management competencies and certification standards. OMB staff concurred with the recommendations.
The Air Force began development of the B-2A in 1981 and reported on June 30, 1997, after 16 years, that development and initial operational test and evaluation had been completed. The Air Force’s reports on the initial operational tests were completed in November 1997. In 1986, the Air Force estimated that B-2A development could be completed for $14.5 billion, including a 4-year, 3,600-hour flight test program scheduled at that time to end in 1993. The flight test program ended on June 30, 1997; by then, the estimated cost of the development program had grown to over $24 billion, and the flight test program had grown to about 5,000 flight test hours over 8 years. The development and testing programs were extended because of Air Force changes in the B-2 requirements and various technical problems. Major changes and problems contributing to the delays included (1) making the B-2A’s primary mission conventional rather than nuclear; (2) redesigning the aircraft to satisfy an added requirement to penetrate adversary air space at low altitudes; (3) difficulty in manufacturing test aircraft, resulting in late delivery of partially complete test aircraft; (4) difficulties achieving acceptable radar cross-section readings on test aircraft, which resulted in significant redesigning and retesting of certain components; and (5) correction of deficiencies in the aft deck structure because of the unanticipated effects of engine exhaust. Even though numerous problems hindered the scheduled completion of B-2A development, production began before any flight testing had been completed. This resulted in substantial overlap of development and production. Test and production aircraft were delivered that did not fully meet Air Force requirements, and a 5-year post-delivery modification program was initiated to update all aircraft to the block 30 configuration. Since production began in 1986, the planned number of B-2As has been reduced from 133 to 21 aircraft, and both the total development cost and the average unit procurement cost have increased. Table 1 shows the change in estimated total and unit cost from 1986 to 1998. The last two of the 21 B-2As were delivered to the Air Force in the block 30 configuration. The major effort remaining in the B-2A acquisition program is modification of the other 19 B-2As to the block 30 configuration, scheduled for completion in July 2000. Through April 1998, six B-2As had been delivered in, or modified to, the block 30 configuration and were operational at Whiteman Air Force Base, Missouri. Ultimately, the Air Force plans to have 21 B-2As, of which 16 will be available for missions (2 squadrons of 8 aircraft) and 5 will be in various maintenance and repair cycles. To test the operational performance of the B-2A, the Air Force measured B-2A performance against five broad operational objectives that were derived from documented Air Force operational requirements and concepts related to nuclear and conventional missions. Figure 1 identifies these operational objectives and the key elements of each that were included in the operational testing. Although test results indicate that the B-2A generally met operational objectives, four deficiencies were identified during testing that will limit or, under some circumstances, change the planned concepts for using the B-2As and slow the aircraft’s operational pace. These relate to mission planning, defensive avionics, low observable materials, and deployment.
As the B-2A matures, numerous minor problems identified in the test reports are scheduled to be corrected or improved based on their relative priorities. These include corrections of minor software and hardware deficiencies, improvements to make crew operations easier or faster, improvements of selected radar modes, and relocation of certain buttons or displays. The corrections and improvements involve flight operation as well as maintenance and support of the aircraft. Ground mission planning, which is still in development, is important to the successful employment of the B-2A because very precise mission routes must be planned to maximize the benefits of the aircraft’s low observable features. Mission planning for the B-2A, done with the automated Air Force Mission Support System (AFMSS), currently takes more time than planned. This will limit the Air Force’s ability to rapidly strike targets and sustain operations. The goal of the AFMSS development program is to produce a mission planning system that can provide specific B-2A mission plans in 8 hours. Testing as of June 30, 1997, concluded that the system frequently malfunctioned, was not flexible or user friendly, and was complex and time-consuming to use. Air Force operators at Whiteman Air Force Base told us that the developmental version of AFMSS had so many failures that they estimated it would take 60 hours to plan a conventional mission and 192 hours to plan a nuclear mission. AFMSS is an acquisition program separate from the B-2A and is being developed to support all Air Force combat aircraft. Work to interface AFMSS with the B-2A began in 1994. According to the operational test report, AFMSS is a complex system made up of separate subsystems developed by different contractors. The Air Force has received various developmental versions of AFMSS subsystems, and additional upgrades to software and hardware are planned in fiscal years 1998 and 1999. The Air Force expects these upgrades to support preparation of mission plans in 8 hours by the third quarter of fiscal year 1999. The Air Force spent over $740 million to develop the defensive system for the B-2A; however, test reports concluded that this system is unsatisfactory. The lack of an effective defensive avionics system could affect the B-2A’s survivability in selected situations because the system is supposed to provide B-2A crews with information on the location of threats, both known and unknown, that they may encounter during a mission. Limited funds and time are available to correct all the deficiencies in the defensive system. The Air Force plans some software upgrades that are intended to provide the defensive system with a limited but useful capability. Air Force officials said the cost of making the defensive system meet its originally planned capability is unaffordable at this time. Air Force officials told us that not all the functions originally planned for the system are required to successfully carry out the planned B-2A missions. The operational test report further stated that, although the defensive system is rated unsatisfactory, the system’s deficiencies do not prevent planning and executing B-2A missions. The test report indicated that the B-2A’s low observability to adversary threat systems permits the use of other tactics that could ensure its effective employment. The defensive system is supposed to provide the crew information on enemy threat systems to enhance B-2A survivability. Known threat locations are included in computer files prior to the mission.
The system is to correlate these with the actual threats as the B-2A flies its mission, but it is also to identify and locate unknown threats that pop up during a mission. However, this system does not work as planned, limiting the utility of information provided to the crew during critical portions of expected B-2 missions. For example, test reports indicate that the defensive system provided inaccurate or cluttered information to the crew and imposed unacceptably high workloads on the operators.

The number and significance of problems with the defensive system were not identified until near the end of the flight test program, leaving Air Force program managers little time to correct problems. Flight testing, where most of the problems were discovered, did not begin for the defensive system until February 1993, almost 4 years after the flight test program started in July 1989 and almost 2 years after other avionics began flight testing in June 1991. According to Air Force officials and an independent review team, several factors contributed to the deficiencies and their discovery late in the developmental and test processes. These included (1) development and testing began late, (2) successful early laboratory tests could not be repeated in flight tests, (3) test results from flight tests were not completely analyzed before tests were continued, (4) the contract provided incentives to move ahead with development rather than correct problems, (5) there was too much confidence that upgrades to computer software would solve the problems, and (6) there were inadequate engineering controls to prevent the overoptimistic view of and approach to this development effort.

The Air Force's cost estimate does not include the cost of correcting all deficiencies but does cover some improvements in the defensive system. The Air Force plans to develop software changes that are scheduled to be available for use by 2000, if tests demonstrate the changes are effective in providing a useful capability. Air Force officials indicated some changes have been tested by operational crews with good success. These software changes are intended to provide capabilities that are useful but less than were expected in the original defensive system design. The Air Force believes these changes will meet its requirements. To achieve the original design would require more costly upgrades, including new computer processors. Expensive hardware upgrades are not included in current Air Force plans to enhance the B-2A.

Historically, defensive avionics have experienced significant problems during development. The B-1B bomber had serious deficiencies with its defensive avionics, and the Air Force is still working to provide an effective defensive capability for the B-1B. Other defensive avionics programs, like the Air Force's ALQ-135 jammer and the Navy's Airborne Self-Protection Jammer, also experienced costly development problems.

Low observable materials and features on the B-2A frequently fail, requiring extensive maintenance. Repairs are time consuming, must be performed under controlled environmental conditions, and involve long cure times for the repaired materials. This reduces the time aircraft are available for operational use, which keeps mission capable rates below the Air Force requirement. These problems increase the amount of time it takes to prepare a B-2A for its next combat flight, potentially reducing the number of sorties that could be flown in a given period of time.
During operational testing, low observable materials and features accounted for 40 percent of unscheduled maintenance and 31 percent of the maintenance hours to repair the aircraft. Aircraft operating at Whiteman Air Force Base experienced results similar to those in the operational test. During a visit to Whiteman Air Force Base, we observed a block 20 B-2A aircraft after a 10-hour flight. The aircraft had damaged tape, caulk, paint, and heat tiles, all low observable materials. In addition, we observed hydraulic fluid leaks beneath the aircraft that further damaged tape and caulk. The Air Force is incorporating some new low observable tape materials into the block 30 aircraft, which should reduce some maintenance; however, according to Air Force officials, this improvement will not be adequate to achieve the operational pace currently planned for the aircraft.

In addition to the frequent failure of these materials, the processes to repair them are time consuming and require an environmentally controlled repair facility. Cure times on some of the low observable tapes and caulks, the items that most frequently fail, can be as long as 72 hours, and most materials require 24 or more hours.

The poor durability of low observable materials and the extensive maintenance they require are important factors keeping the B-2As from achieving desired mission capable rates—the Air Force measure of an aircraft fleet's availability to perform its assigned missions. At maturity, the Air Force goal for a mission capable rate is 77 percent. On average, the mission capable rate in calendar year 1997, when including the effects of low observable features, was 36 percent, less than half the goal. The Air Force has prepared a comprehensive plan to develop, test, and install new and improved low observable materials and to improve repair processes, reduce cure times, and develop new diagnostic tools that should allow the B-2A to meet operational requirements. The plan extends through 2005 and shows that funds required for research and development, procurement, and operations and maintenance could total about $190 million, of which $144 million is not in the current cost estimate.

The operational test report states that the block 30 B-2A aircraft must be sheltered to protect it from weather and provide a suitable environment in which to maintain low observables. The Air Force is studying options for providing shelters, including the purchase of portable shelters and use of existing facilities. The Air Force plans to buy a portable deployment shelter as a test article to determine if the portable shelters will be adequate to protect and maintain the B-2A's low observable features. If the Air Force buys the shelters, at a minimum it will require 17 shelters: 1 training shelter and 1 operational shelter for each of the 16 primary mission aircraft. Air Force officials stated they are dedicated to buying the deployment shelters but have not determined how many shelters are needed to support B-2A deployments or the shelter configuration. In addition, they said funding sources have not been identified, but the shelters will likely cost a total of between $15 million and $25 million, depending upon the quantity purchased.

Air Force officials said they have begun to practice deploying the B-2A and that additional requirements will likely be identified as these exercises continue. The Air Force completed one exercise, deploying two B-2As to Guam, in March 1998, and plans two more in 1998.
Air Force officials advised us that the B-2As performed well in the March 1998 deployment, but as of April 1998, an official report on the results had not been issued.

The fiscal year 1999 B-2A cost estimate indicates it will cost $44.3 billion in then-year dollars to complete development, procurement, and modification. However, the Air Force will incur additional costs if it plans to correct the deficiencies identified during testing and achieve the full operational capability originally planned for the B-2A. At this time, there is no comprehensive plan that identifies the efforts required to achieve the full B-2A capability, the likely cost of these efforts, or a funding plan. Further, the Air Force has not yet determined all requirements needed to achieve some capabilities.

Through fiscal year 1998, the Air Force had been appropriated $43.3 billion, or 98 percent of the $44.3 billion estimate. Air Force estimates show the funding required from fiscal years 1999 to 2003 to complete development is $446.7 million, and the funding required to complete procurement and modifications from fiscal years 1999 to 2005 is $599.4 million. Table 2 shows the major elements of costs for which funding is to be requested in fiscal years 1999 and beyond.

As discussed above, testing identified four deficiencies that will require additional funding if the Air Force is to fully correct them. In addition to the cost increases needed for defensive avionics, low observable materials, and support needed for deployment, the Air Force will also incur costs to procure spares to support the nuclear mission of the B-2A. Table 3 shows estimated costs to fix deficiencies that are not in the current cost estimate as well as areas of other potential cost increases not yet fully defined by the Air Force.

The Air Force program to upgrade 19 B-2A aircraft to the block 30 configuration is falling behind schedule, and further delays are possible. In addition, modified aircraft have been delivered with significant numbers of deficiencies. Air Force officials said Northrop Grumman has not been able to hire adequate numbers of workers; therefore, modifications have been delayed. Both the Air Force and Northrop Grumman were trying to complete modifications based on schedules that were 3 to 6 months ahead of the contract schedule. Because of delays and problems, these accelerated schedules have been discarded. As of April 1998, Northrop Grumman had delivered three modified aircraft later than, and one modified aircraft earlier than, the contract schedule.

The Air Force is assessing schedule performance and studying the funding implications of a schedule slip. At this time, the Air Force believes adequate funds are available to complete the modifications. The Air Force is also assessing a planned schedule change that could significantly delay the modification program for one aircraft. This change would accommodate the need to provide an aircraft for flight testing of planned upgrades. Until the assessment is complete, Air Force officials said it is not possible to determine if there will be a cost impact on the modification program.

All four block 30 aircraft delivered from the modification line have a significant number of deficiencies. Air Force officials stated that some of these deficiencies are not operationally critical and will be corrected during regularly scheduled maintenance activities.
They said a team will be located at Whiteman Air Force Base, Missouri, to correct some of the deficiencies, and others will be corrected during normal aircraft maintenance cycles to maintain the aircraft in active operational service. The four aircraft have from 30 to 46 deficiencies each, and, to ensure corrections are made, the Air Force has withheld contractor payments totaling $24.5 million for two of the delivered aircraft.

DOD should determine the nature and cost of those efforts that remain to be accomplished to bring the B-2A into compliance with operational requirements established by the Air Force. This report identifies various deficiencies that are unresolved and indicates the Air Force is still identifying other requirements that may require further effort and funding. We recommend that the Secretary of Defense direct the Secretary of the Air Force to identify the remaining efforts needed to achieve full operational capability, the costs to complete these efforts, and the fiscal year funding requirements not currently in the fiscal year 1999 President's Budget for the B-2A program. We further recommend that this information be provided to Congress with the fiscal year 2000 President's Budget in the form of a comprehensive plan to complete the B-2A program.

In commenting on a draft of this report, DOD partially concurred with the recommendations. DOD stated the B-2A is projected to meet full operational capability by the third quarter of fiscal year 1999 as a "baseline program" within currently programmed funding. DOD therefore stated that no additional reporting is required on baseline requirements. DOD defines the baseline program as being a block 30 aircraft. The DOD position assumes all operational problems discussed in this report will be resolved without additional cost, but until these deficiencies have been proven to be corrected, some cost uncertainty remains. In addition, as this report points out, the Air Force has accepted the block 30 aircraft with less performance in some areas than originally planned in the baseline program. DOD agreed there is a need to identify to Congress future efforts and funding requirements to upgrade current B-2As. DOD said it is developing a long-range plan for upgrades to the bomber force and that funding requirements will be included in the normal budgeting process. This action is consistent with our recommendations. DOD's comments are presented in their entirety in appendix I. DOD also provided technical comments, which have been incorporated in this report as appropriate.

To identify deficiencies with the operational performance of the B-2A, we reviewed key test reports and summaries prepared by the B-2A Combined Test Force, which conducted the developmental test and evaluations, and the Air Force Operational Test and Evaluation Command, which conducted the initial operational test and evaluations. We also reviewed assessments of the B-2A operational testing prepared by the Office of the Secretary of Defense, Operational Test and Evaluation, and we reviewed various program management and engineering reports that summarized performance and testing efforts being conducted on the B-2A program. We interviewed Air Force engineers, test managers, and program management officials to determine the nature and extent of problems that were identified. We also discussed deficiencies identified during testing and the current operational experience and performance of operational B-2As with Air Force officials at Whiteman Air Force Base, Missouri.
To identify cost issues and plans to correct deficiencies, we reviewed the available planning documents that identified corrective plans and funding requirements for selected deficiencies. We reviewed the B-2A program office's annual cost and budgetary estimates, financial and management reports, contract cost reports, program schedules and plans, and other documents. We also interviewed Air Force officials in the B-2A program and at Air Combat Command to determine cost and funding plans to correct deficiencies and complete efforts necessary to provide fully operational B-2A aircraft.

To identify the status of the block 30 modification schedule, we reviewed the contract and planning schedules for the block 30 modification process, delivery documents identifying the delivery date and number of deficiencies on the delivered aircraft, and reports showing planned and actual manning at the contractor's modification facility. We also discussed with Air Force managers of the modification process the reasons for delayed deliveries, the changing schedules, and the plans to correct remaining deficiencies.

We performed our review from September 1997 to May 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Air Force, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others upon request. Please contact me on (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II.
Pursuant to a legislative requirement, GAO reviewed the total acquisition costs of the B-2A bomber, focusing on: (1) deficiencies that must be corrected to achieve Air Force objectives for the B-2A; (2) additional costs to correct the deficiencies; and (3) the B-2A modification schedule. GAO noted that: (1) the Air Force evaluated the B-2A's capability to meet several broad objectives--strike rapidly, sustain operations, deploy to forward locations, survive in hostile environments, and accurately deliver weapons; (2) the November 1997 operational test reports concluded that B-2As, in the block 30 configuration, are operationally effective, but with several important deficiencies that limit the aircraft's ability to fully meet those objectives as planned; (3) the test reports identify four deficiencies: (a) incomplete development of the automated ground mission planning system, which is needed to rapidly plan and carry out B-2A strike missions; (b) unsatisfactory performance of the defensive avionics system, which is used to provide enemy threat information to the crews and increase their survivability in certain situations; (c) inadequate reliability and maintainability of low observable materials and structures, reducing the ability to sustain the defined pace of operations while maintaining a high degree of survivability for conventional B-2A missions; and (d) lack of environmental shelters to maintain low observable materials and to protect the aircraft from certain weather conditions during deployment; (4) the fiscal year 1999 B-2A cost estimate identifies the cost to complete the B-2A program for the block 30 configuration at $44.3 billion in then-year dollars; (5) included in this figure is funding to correct or improve some, but not all, of the deficiencies listed above; (6) for example, the estimate does not include the additional costs that would be incurred if defensive avionics were required to achieve the originally planned capability, which Department of Defense officials said is no longer required at this time; (7) however, it does include funding for software upgrades to improve the system's performance, which meets current operational objectives; (8) further, it does not include the cost to improve low observable materials, which are needed to sustain the pace of B-2A operations, and to provide for a sufficient number of deployment shelters to accommodate repairs to B-2As; (9) the estimate also excludes costs to buy spare parts that are being identified to support the B-2A's nuclear mission; (10) modifications of B-2As to the block 30 configuration have not been accomplished on schedule; (11) four modified aircraft were delivered as of April 1998--three later than scheduled and one ahead of schedule; and (12) according to the Air Force, the contractor has had difficulty hiring enough personnel to achieve the schedule.
With the passage of ATSA in November 2001, TSA assumed from the Federal Aviation Administration (FAA) the majority of the responsibility for securing the commercial aviation system. Under ATSA, TSA is responsible for ensuring that all baggage is properly screened for explosives at airports in the United States where screening is required, and for the procurement, installation, and maintenance of explosive detection systems used to screen checked baggage for explosives. ATSA required that TSA screen 100 percent of checked baggage using explosive detection systems by December 31, 2002. As it became apparent that certain airports would not meet the December 2002 deadline to screen 100 percent of checked baggage for explosives, the Homeland Security Act of 2002 in effect extended the deadline to December 31, 2003, for noncompliant airports.

Prior to the passage of ATSA in November 2001, only limited screening of checked baggage for explosives occurred. When this screening took place, air carriers had operational responsibility for conducting the screening, while FAA maintained oversight responsibility. With the passage of ATSA, TSA assumed operational responsibility from air carriers for screening checked baggage for explosives. Airport operators and air carriers continued to be responsible for processing and transporting passenger checked baggage from the check-in counter to the airplane.

Explosive detection systems include EDS and ETD machines. EDS machines, which cost approximately $1 million each, use computer-aided tomography X-rays adapted from the medical field to automatically recognize the characteristic signatures of threat explosives. By taking the equivalent of hundreds of X-ray pictures of a bag from different angles, the EDS machine examines the objects inside the baggage to identify characteristic signatures of threat explosives. TSA certified, procured, and deployed EDS machines manufactured by two companies, and has recently certified a smaller, less costly EDS machine, which is currently being operationally tested. ETD machines, which cost approximately $40,000 each, work by detecting vapors and residues of explosives. Because human operators collect samples by rubbing bags with swabs, which are then chemically analyzed in the ETD machines to identify any traces of explosive materials, the use of ETD is more labor-intensive and subject to more human error than the automated process of using EDS machines. ETD is used both for primary, or initial, screening of checked baggage and for secondary screening, which resolves alarms from EDS machines that indicate the possible presence of explosives inside a bag. TSA has certified, procured, and deployed ETD machines from three manufacturers.

As we reported in February 2004, to initially deploy EDS and ETD equipment to screen 100 percent of checked baggage for explosives, TSA implemented interim airport lobby solutions and in-line EDS baggage screening systems. The interim lobby solutions involved placing stand-alone EDS and ETD machines in the nation's airports, most often in airport lobbies or baggage makeup areas where baggage is sorted for loading onto aircraft.
For EDS used in a stand-alone mode (not integrated with an airport's or air carrier's baggage conveyor system) and for ETD, TSA screeners are responsible for obtaining the passengers' checked baggage from either the passenger or the air carrier, lifting the bags onto and off of EDS machines or ETD tables, using TSA protocols to appropriately screen the bags, and returning the cleared bags to the air carriers to be loaded onto departing aircraft. In addition to installing stand-alone EDS and ETD machines in airport lobbies and baggage makeup areas, TSA collaborated with some airport operators and air carriers to install integrated in-line EDS baggage screening systems within their baggage conveyor systems.

Since its inception in November 2001 through September 2004, TSA used its funds to procure and install about 1,200 EDS machines and about 6,000 ETD machines to screen checked baggage for explosives at over 400 airports and to modify airport facilities to accommodate this equipment. For the most part, TSA deployed EDS machines at larger airports and ETD machines at smaller airports, resulting in primary screening being conducted solely with ETD machines at over 300 airports. Table 1 summarizes the location of EDS and ETD equipment at the nation's airports by airport category, based on a June 2004 TSA inventory listing. The number of machines shown in table 1 includes EDS and ETD machines procured by both TSA and FAA prior to and during the establishment of TSA.

Although TSA made significant progress in fielding this equipment, TSA used most of its fiscal years 2002 through 2004 funds for its checked baggage screening program to design, develop, and deploy interim lobby screening solutions rather than install more permanent in-line EDS baggage screening systems. During our site visits to 22 category X, I, and II airports, we observed that in most cases, TSA used stand-alone EDS machines and ETD machines as the primary method for screening checked baggage. Generally, this equipment was located in airport lobbies and in baggage makeup areas.

In addition, in our survey of 155 federal security directors, we asked the directors to estimate, for the 263 airports included in the survey, the approximate percentage of checked baggage that was screened on or around February 29, 2004, using EDS, ETD, or other approved alternatives for screening baggage, such as positive passenger bag match or canine searches. As shown in table 2, the directors reported that for the 130 large to medium-sized airports in our survey (21, 60, and 49 category X, I, and II airports, respectively), most of the checked baggage was screened using stand-alone EDS or ETD machines. The average percentage of checked baggage reported as screened using EDS machines at airports with partial or full in-line EDS capability ranged from 4 percent for category II airports to 11 percent for category X airports. In addition, the directors reported that ETD machines were used to screen checked baggage 93 to 99 percent of the time at category III and IV airports, respectively.

Stand-alone EDS and ETD machines are both labor- and time-intensive to operate since each bag must be physically carried to an EDS or ETD machine for screening and then moved back to the baggage conveyor system prior to being loaded onto an aircraft.
With an in-line EDS system, checked baggage is screened within an airport's baggage conveyor system, eliminating the need for a baggage screener or other personnel to physically transport the baggage from the check-in point to the EDS machine for screening and then to the airport baggage conveyor system. Further, according to TSA officials, ETD machines and stand-alone EDS machines are less efficient in the number of checked bags that can be screened per hour per machine than are EDS machines that are integrated in-line with the airport baggage conveyor systems. As shown in table 3, as of October 2003, TSA estimated that the number of checked bags screened per hour could more than double when EDS machines were placed in-line versus being used in a stand-alone mode.

In January 2004, TSA, in support of its planning, budgeting, and acquisition of security screening equipment, reported to the Office of Management and Budget (OMB) that the efficiency benefits of in-line rather than stand-alone EDS are significant, particularly with regard to bags per hour screened and the number of TSA screeners required to operate the equipment. According to TSA officials, at that time, a typical lobby-based screening unit consisting of a stand-alone EDS machine with three ETD machines had a baggage throughput of 376 bags per hour with a staffing requirement of 19 screeners. In contrast, TSA estimated that approximately 425 bags per hour could be screened by in-line EDS machines with a staffing requirement of 4.25 screeners.

In order to achieve the higher throughput rates and reduce the number of screener staff needed to operate in-line baggage screening systems, TSA (1) uses a screening procedure known as "on-screen alarm resolution" and (2) networks multiple in-line EDS machines together, referred to as "multiplexing," so that the computer-generated images of bags from these machines are sent to a central location where TSA screeners can monitor the images of suspect bags centrally from several machines using the on-screen alarm resolution procedure.

When an EDS machine alarms, indicating the possibility that explosive material may be contained in the bag, the on-screen alarm resolution procedure allows screeners to examine computer-generated images of the inside of a bag to determine if suspect items identified by the EDS machines are in fact suspicious. If a screener, by viewing these images, is able to determine that the suspect item or items identified by the EDS machine are in fact harmless, the screener is allowed to clear the bag, and it is sent to the airline baggage makeup area for loading onto the aircraft. If the screener is not able to make the determination that the bag does not contain suspicious objects, the bag is sent to a secondary screening room where the bag is further examined by a screener. In secondary screening, the screener opens the bag and examines the suspect item or items, and usually swabs the items to collect a sample for analysis using an ETD machine. TSA also uses this on-screen alarm resolution procedure with stand-alone EDS machines. A TSA official estimated that the on-screen alarm resolution procedure with in-line EDS baggage screening systems will enable TSA to reduce by 40 to 60 percent the number of bags requiring the more labor-intensive secondary screening using ETD machines.
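These reported figures imply a large difference in throughput per screener. The following minimal Python sketch is ours, not TSA's; it simply reproduces the comparison from the numbers cited above, and the function and variable names are illustrative:

```python
# Illustrative sketch (not part of TSA's analysis): reproduces the
# throughput-per-screener comparison from the figures TSA reported to OMB.

def bags_per_screener_hour(bags_per_hour: float, screeners: float) -> float:
    """Throughput normalized by staffing: bags screened per screener-hour."""
    return bags_per_hour / screeners

lobby = bags_per_screener_hour(376, 19)      # stand-alone EDS plus three ETDs
in_line = bags_per_screener_hour(425, 4.25)  # multiplexed in-line EDS

print(f"lobby unit:  {lobby:.1f} bags per screener-hour")   # ~19.8
print(f"in-line EDS: {in_line:.1f} bags per screener-hour") # 100.0
print(f"ratio: {in_line / lobby:.1f}x")                     # ~5.1x
```

By this arithmetic, a multiplexed in-line configuration screens roughly five times as many bags per screener-hour as a lobby-based unit, which is consistent with the staffing savings TSA reported.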
In estimating the potential savings in staffing requirements, TSA officials stated that they expect to achieve a 20 to 25 percent savings because of reductions in the number of staff needed to screen bags using ETD to resolve alarms from in-line EDS machines. TSA also reported that because procedures for using stand-alone EDS and ETD machines require screeners to lift heavy baggage onto and off of the machines, the interim lobby screening solutions used by TSA led to significant numbers of on-the-job injuries.

In addition, in responding to our survey about 263 airports, numerous federal security directors reported that on-the-job injuries related to lifting heavy baggage onto or off the EDS and ETD machines were a significant concern at the airports for which they were responsible. Specifically, these federal security directors reported that on-the-job injuries caused by lifting heavy bags onto and off of EDS machines were a significant concern at 65 airports, and injuries associated with the use of ETD machines were a significant concern at 110 airports. To reduce on-the-job injuries, TSA has provided training to screeners on proper lifting procedures. However, according to TSA officials, in-line EDS screening systems would significantly reduce the need for screeners to handle baggage, thus further reducing the number of on-the-job injuries being experienced by TSA baggage screeners.

In addition, during our site visits to 22 large and medium-sized airports, several TSA, airport, and airline officials expressed concern regarding the security risks caused by overcrowding due to ETD and stand-alone EDS machines being located in airport lobbies. The location of the equipment resulted in less space available to accommodate passenger movement and caused congestion due to passengers having to wait in lines in public areas to have their checked baggage screened. TSA headquarters officials also reported that large groups of people congregating in crowded airport lobbies increase security risks by creating a potential target for terrorists. The TSA officials noted that crowded airport lobbies have been the scenes of terrorist attacks in the past. For example, in December 1985, four terrorists walked to the El Al ticket counter at Rome's Leonardo DaVinci Airport and opened fire with assault rifles and grenades, killing 13 and wounding 75. On that same day, three terrorists killed three people and wounded 30 others at Vienna International Airport.

Airport operators and TSA are taking actions to install in-line EDS baggage screening systems because of the expected benefits of these systems. Our survey of federal security directors and interviews with airport officials revealed that 86 of 130 category X, I, and II airports (66 percent) included in our survey either have, are planning to have, or are considering installing in-line EDS baggage screening systems throughout or at a portion of their airports. As of July 2004, 12 airports had operational in-line systems airportwide or at a particular terminal or terminals, and an additional 45 airports were actively planning or constructing in-line systems. Our survey of federal security directors further revealed that an additional 33 of the 130 category X, I, and II airports we surveyed were considering developing in-line systems. While in-line EDS baggage screening systems have a number of potential benefits, the total cost to install these systems is unknown, and limited federal resources have been made available to fund these systems on a large-scale basis.
In-line baggage screening systems are capital-intensive because they often require significant airport modifications, including terminal reconfigurations, new conveyor belt systems, and electrical upgrades. TSA has not determined the total cost of installing in-line EDS baggage screening systems at airports that it had determined need these systems to maintain compliance with the congressional mandate to screen all checked baggage for explosives using explosive detection systems, or to achieve more efficient and streamlined checked baggage screening operations. However, TSA and airport industry association officials have estimated that the total cost of installing in-line systems is—a rough order-of-magnitude estimate—from $3 billion to more than $5 billion.

TSA officials stated that they have not conducted a detailed analysis of the costs required to install in-line EDS systems at airports because most of their efforts have been focused on deploying and maintaining a sufficient number of EDS and ETD machines to screen all checked baggage for explosives. TSA officials further stated that the estimated costs to install in-line baggage screening systems would vary greatly from airport to airport depending on the size of the airport and the extent of airport modifications that would be required to install the system. While we did not independently verify the estimates, officials from the Airports Council International-North America and American Association of Airport Executives estimated that project costs for in-line systems could range from about $2 million for a category III airport to $250 million for a category X airport.

TSA and airport operators are relying on LOI agreements as their principal method for funding the modification of airport facilities to incorporate in-line baggage screening systems. As of January 2005, TSA had issued eight LOIs to reimburse nine airports for the installation of in-line EDS baggage screening systems, for a total cost of $957.1 million to the federal government over 4 years. In addition, TSA officials stated that as of July 2004, they had identified 27 additional airports that they believe would benefit from receiving LOIs for in-line systems because such systems are needed to screen an increasing number of bags due to current or projected growth in passenger traffic. TSA officials stated that without such systems, these airports would not remain in compliance with the congressional mandate to screen all checked baggage using EDS and ETD. However, because TSA would not identify these 27 airports, we were unable to determine whether these airports are among the 45 airports we identified as in the process of planning or constructing in-line systems.

TSA officials stated that they also use other transaction agreements as an administrative vehicle to directly fund, with no long-term commitments, airport operators for smaller in-line airport modification projects. Under these agreements, as implemented by TSA, the airport operator also provides a portion of the funding required for the modification. As of September 30, 2004, TSA had negotiated arrangements with eight airports to fund small permanent in-line projects or portions of large permanent in-line projects using other transaction agreements.
These other transaction agreements range from about $640,000 to help fund the conceptual design of an in-line system for one terminal at the Dallas-Fort Worth airport to $37.5 million to help fund the design and construction of in-line systems and modification of the baggage handling systems for two terminals at Chicago O'Hare International Airport. TSA officials stated that they would continue to use other transaction agreements to help fund smaller in-line projects.

Airport operators also used the FAA's Airport Improvement Program—grants to maintain safe and efficient airports—in fiscal years 2002 and 2003 to help fund facility modifications needed to accommodate installing in-line systems. Twenty-eight of the 53 airports that reported either having constructed or planning to construct in-line systems relied on the Airport Improvement Program as their sole source of federal funding.

Airport officials at over half of the 45 airports that we identified as in the process of planning or constructing in-line systems stated that they will require federal funding in order to complete the planning and construction of these in-line systems. TSA officials also reported that additional airports will require in-line systems to maintain compliance with the congressional mandate to screen 100 percent of checked baggage for explosives. Despite this reported need, TSA officials stated that they do not have sufficient resources in their budget to fund additional LOIs beyond the eight LOIs that have already been issued.

The Vision 100—Century of Aviation Reauthorization Act (Vision 100) provided for the creation of the Aviation Security Capital Fund to help pay for, among other things, placing EDS machines in line with airport baggage handling systems. However, according to OMB officials, the President's fiscal year 2005 budget request, which included the Aviation Security Capital Fund's mandatory appropriation of $250 million, only supported continued funding for the eight LOIs that had already been issued and did not provide resources to support new LOIs for funding the installation of in-line systems at additional airports. Further, while the fiscal year 2005 Department of Homeland Security (DHS) Appropriations Act provided $45 million for installing explosive detection systems in addition to the $250 million from the Aviation Security Capital Fund, Congress directed, in the accompanying conference report, that the $45 million be used to assist in the continued funding of the existing eight LOIs. Moreover, the President's fiscal year 2006 budget request for TSA provides approximately $240.5 million for the continued funding of the eight existing LOIs and does not allocate any funding for new LOI agreements for in-line system integration activities.

The fiscal year 2006 Department of Homeland Security appropriations bill passed by the House on May 17, 2005, and the appropriations bill pending before the Senate include, among other things, $75 million and $14 million, respectively, for installation of checked baggage explosive detection systems. The committee reports accompanying the House and Senate appropriations bills state that the amounts included for installation are in addition to the $250 million mandatory appropriation of the Aviation Security Capital Fund but do not earmark these funds specifically for the installation of in-line EDS systems.
In addition, perspectives differ regarding the appropriate role of the federal government, airport operators, and air carriers in funding these capital-intensive in-line EDS systems. Under LOI agreements, which have been TSA's primary method for funding in-line EDS systems, airport operators and TSA have shared the total costs—25 percent and 75 percent, respectively. A 75 percent federal cost share will apply to any project under an LOI for fiscal year 2005. Further, the President's fiscal year 2006 budget request for TSA proposes to maintain the 75 percent federal government cost share for projects funded by LOIs at large and medium airports. For fiscal year 2006 appropriations for DHS, both the Senate, in its pending appropriations bill, and the House, in its committee report, also propose to maintain the 75 percent federal cost share for LOIs.

However, in testimony before Congress, an aviation industry official expressed a different perspective regarding the cost sharing between the federal government and the aviation industry for installing in-line checked baggage screening systems. Testifying in July 2004, the official said that airports contend that the cost of installing in-line systems should be met entirely by the federal government, given its direct responsibility, established by law, for screening checked baggage; in light of the national security imperative for doing so; and because of the economic efficiencies of this strategy. Although the official stated that airports have agreed to provide a local match of 10 percent of the cost of installing in-line systems at medium and large airports, as stipulated by Vision 100, he expressed opposition to the administration's proposal, subsequently adopted by Congress for fiscal year 2005, to reestablish the airports' cost share at 25 percent.

In July 2004, the National Commission on Terrorist Attacks upon the United States (the 9/11 Commission) also addressed the issue of the federal government/airport cost share for installing in-line EDS baggage screening systems. Specifically, the commission recommended that TSA expedite the installation of in-line systems and that the aviation industry pay its fair share of the costs associated with installing these systems, since the industry will derive many benefits from the systems. Although the 9/11 Commission recommended that the aviation industry pay its fair share of the costs of installing in-line systems, the commission did not report what it believed the fair share to be.

At the time of our March 2005 report, TSA had not completed a systematic, prospective analysis of individual airports or groups of airports to determine at which airports installing in-line EDS systems would be cost-effective in terms of reducing long-term screening costs for the government and would improve security. Such an analysis would enable TSA to determine at which airports it would be most beneficial to invest limited federal resources for in-line systems rather than continue to rely on the stand-alone EDS and ETD machines to screen checked baggage for explosives, and it would be consistent with the best practices for preparing benefit-cost analyses of government programs or projects called for by OMB Circular A-94.
TSA officials stated that they had not conducted the analyses related to the installation of in-line systems at individual airports or groups of airports because they have used available staff and funding to ensure all airports have a sufficient number of EDS or ETD machines to meet the congressional mandate to screen all checked baggage with explosive detection systems. During the course of our review, in September 2004, TSA contracted for services to develop methodologies and criteria for assessing the effectiveness and suitability of airport screening solutions requiring significant capital investment, such as those projects associated with the LOI program. In July 2005, TSA officials stated that TSA and DHS are reviewing a draft report from the study. According to these officials, the study will provide TSA with a strategic plan for its checked baggage screening program, including the best screening solution for airports processing most of the airlines' baggage volume, and the capital costs and staffing requirements for each solution.

Although TSA had not conducted a systematic analysis of the cost savings and other benefits that could be derived from the installation of in-line baggage screening systems, TSA's limited, retrospective cost-benefit analysis of in-line projects at the nine airports with signed LOI agreements found that significant savings and other benefits may be achieved through the installation of these systems. This analysis was conducted in May 2004—after the eight LOI agreements for the nine airports were signed in July and September 2003 and February 2004—to estimate potential future cost savings and other benefits that could be achieved from installing in-line systems instead of using stand-alone EDS systems. TSA estimated that in-line baggage screening systems at these airports would save the federal government about $1 billion over 7 years compared with stand-alone EDS systems and that TSA would recover its initial investment in a little over 1 year. TSA's analysis also provided data to estimate the cost savings for each airport over the 7-year period covered by the analysis. According to TSA's data, federal cost savings varied from about $50 million to over $250 million at eight of the nine airports, while at one airport, there was an estimated $90 million loss.

According to TSA's analysis of the nine LOI airports, in-line cost savings critically depend on how much an airport's facilities have to be modified to accommodate the in-line configuration. Savings also depend on TSA's costs to buy, install, and network the EDS machines; subsequent maintenance costs; and the number of screeners needed to operate the machines in-line instead of using stand-alone EDS systems. In its analysis, TSA also found that a key factor driving many of these costs is throughput—how many bags an in-line EDS system can screen per hour compared with the rate for a stand-alone system. TSA used this factor to determine how many stand-alone EDS machines could be replaced by a single in-line EDS machine while achieving the same throughput. According to TSA's analysis, in-line EDS would reduce by 78 percent the number of TSA baggage screeners and supervisors required to screen checked baggage at these nine airports, from 6,645 to 1,477 screeners and supervisors. However, the actual number of TSA screener and supervisor positions that could be eliminated would depend on the individual design and operating conditions at each airport.
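TSA's reported figures can be cross-checked with simple arithmetic. The sketch below is ours, not part of TSA's analysis: it verifies the 78 percent staffing reduction from the screener counts cited above and illustrates a generic payback calculation. Because the report does not state the up-front LOI investment, the investment figure used here is hypothetical.

```python
# Cross-check of figures from TSA's May 2004 retrospective analysis of the
# nine LOI airports. Screener counts are from the text; the payback example
# uses a hypothetical investment because the report does not state one.

standalone_staff = 6_645   # screeners and supervisors with stand-alone EDS/ETD
in_line_staff = 1_477      # with in-line EDS

reduction = 1 - in_line_staff / standalone_staff
print(f"staff reduction: {reduction:.0%}")  # 78%, matching TSA's figure

def payback_years(investment: float, annual_savings: float) -> float:
    """Years to recover an up-front investment from uniform annual savings."""
    return investment / annual_savings

# If the ~$1 billion in 7-year savings accrued evenly (~$143 million a year),
# recovery "in a little over 1 year" would imply an up-front federal
# investment on the order of $150 million to $200 million (hypothetical;
# actual savings would not accrue uniformly across airports or years).
print(f"payback: {payback_years(175e6, 1e9 / 7):.2f} years")  # ~1.23
```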
TSA also reported that aside from increased efficiency and lower overall costs, in-line systems would provide a number of qualitative benefits over stand-alone systems, including fewer on-the-job injuries, since there is less lifting of baggage when EDS machines are integrated into the airport's baggage conveyor system; less lobby disruption, because the stand-alone EDS and ETD machines would be removed from airport lobbies; and an unbroken chain of custody of baggage, because baggage handling is performed away from passengers, making in-line systems more secure.

TSA's retrospective analysis of these nine airports indicates the potential for cost savings through the installation of in-line EDS baggage screening systems at other airports, and it provides insights about key factors likely to influence potential cost savings from using in-line systems at other airports. This analysis also indicates the merit of conducting prospective analyses of other airports to provide information for future federal government funding decisions, as required by the OMB guidance on cost-benefit analyses. This guidance describes best practices for preparing benefit-cost analyses of government programs or projects, one of which involves analyzing uncertainty. Given the diversity of airport designs and operations, TSA's analysis could be modified to account for uncertainties in the values of some of the key factors, such as how much it will cost to modify an airport to install an in-line system. Analyzing uncertainty in this manner is consistent with OMB guidance.

TSA also has not systematically analyzed which airports could benefit from additional stand-alone EDS systems in lieu of the labor-intensive ETD systems at the more than 300 airports that rely on ETD machines and where in-line EDS systems may not be appropriate or cost-effective. More specifically, TSA has not prepared a plan that prioritizes which airports should receive EDS machines (including machines that become surplus because of the installation of in-line systems) to balance short-term installation costs with future operational savings. Furthermore, TSA has not yet determined the potential long-term operating cost savings and the short-term costs of installing the systems, which are important factors to consider in conducting analyses to determine whether airports would benefit from the installation of EDS machines. TSA officials said that they had not yet had the opportunity to develop such analyses or plans, and they did not believe that such an exercise would necessarily be an efficient use of their resources, given the fluidity of baggage screening at various airports.

There is potential for TSA to benefit from the introduction of smaller stand-alone EDS machines—in terms of labor savings and added efficiencies—at some of the more than 300 airports where TSA relies on the use of ETD machines to screen checked baggage. Stand-alone EDS machines are able to screen a greater number of bags in an hour than the ETD machines used for primary screening, while lessening reliance on screeners during the screening process. For example, TSA's analysis showed that an ETD machine can screen 36 bags per hour, while stand-alone EDS machines can screen 120 to 180 bags per hour. As a result, it would take three to five ETD machines to screen the same number of bags that one stand-alone EDS machine could process. In addition, greater use of stand-alone EDS machines could reduce staffing requirements.
For example, one stand-alone EDS machine would potentially require 6 to 14 fewer screeners than would be required to screen the same number of bags at a screening station with three to five ETD machines. This calculation is based on TSA estimates that 4.1 screeners are required to support each primary screening ETD machine, while one stand-alone EDS machine requires 6.75 screeners—including staff needed to operate ETD machines required to provide secondary screening.

Without a plan for installing in-line EDS baggage screening systems, and for using additional stand-alone EDS systems in place of ETD machines at the nation's airports, it is unclear how TSA will make use of new technologies for screening checked baggage for explosives, such as the smaller and faster EDS machines that may become available through TSA's research and development programs. For example, TSA is working with private sector firms to enhance existing EDS systems and develop new screening technologies through its research and development (R&D) efforts. As part of these efforts, in fiscal year 2003, TSA spent almost $2.4 million to develop a new computer-aided tomography explosives detection system that is smaller and lighter than systems currently deployed in airport lobbies. The new system is intended to replace systems currently in use, including larger and heavier EDS machines and ETD equipment. The smaller size of the system creates opportunities for TSA to transfer screening operations to other locations such as airport check-in counters. TSA certified this equipment in December 2004 and is operationally testing the machine at three airports to evaluate its operational efficiency.

In September 2004, we reported on TSA's and DHS's transportation security R&D efforts and found that, for technologies in development, the agency had not estimated deployment dates, without which managers do not have the information needed to plan, budget, and track the progress of projects. We also found that TSA and DHS did not have adequate databases to monitor and manage the spending of the hundreds of millions of dollars that Congress had appropriated for R&D. (See GAO, Transportation Security R&D: TSA and DHS Are Researching and Developing Technologies, but Need to Improve R&D Management, GAO-04-890 (Washington, D.C.: Sept. 30, 2004); see also GAO, Homeland Security: Key Elements of a Risk Management Approach, GAO-02-150T (Washington, D.C.: Oct. 12, 2001).) In moving forward, it will be important for DHS to resolve these challenges to help ensure that limited R&D resources are focused on the areas of greatest need.

TSA has made substantial progress in installing EDS and ETD systems at the nation's airports—mainly as part of interim lobby screening solutions—to provide the capability to screen all checked baggage for explosives, as mandated by Congress. With the objective of initially fielding this equipment largely accomplished, TSA needs to shift its focus from equipping airports with interim screening solutions to systematically planning for the more optimal deployment of checked baggage screening systems. Part of such planning should include analyzing which airports should receive federal support for in-line EDS baggage screening systems based on cost savings that could be achieved from more effective and efficient baggage screening operations and on other factors, including enhanced security.
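The staffing arithmetic behind the ETD-versus-EDS comparison above can be reproduced directly from the TSA estimates cited in this statement. A minimal Python sketch follows; it is ours, for illustration, and the variable names are not TSA's:

```python
# Illustrative arithmetic behind the staffing comparison above, using TSA
# estimates cited in the text: an ETD machine screens 36 bags per hour versus
# 120 to 180 for a stand-alone EDS machine, so three to five ETD machines are
# needed to match one EDS machine.

ETD_RATE, EDS_RATE_LOW, EDS_RATE_HIGH = 36, 120, 180  # bags per hour
SCREENERS_PER_ETD = 4.1    # primary-screening ETD station
SCREENERS_PER_EDS = 6.75   # includes ETD staff for secondary screening

print(EDS_RATE_LOW / ETD_RATE, EDS_RATE_HIGH / ETD_RATE)  # ~3.3 and 5.0

for etd_machines in (3, 5):  # ETD machines replaced by one stand-alone EDS
    saved = etd_machines * SCREENERS_PER_ETD - SCREENERS_PER_EDS
    print(f"{etd_machines} ETDs replaced: ~{saved:.1f} fewer screeners")
# 3 ETDs replaced: ~5.6 fewer screeners
# 5 ETDs replaced: ~13.8 fewer screeners (i.e., roughly 6 to 14)
```

This simple substitution calculation quantifies the labor-savings case for stand-alone EDS at airports where in-line systems may not be justified, which is the point developed next.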
Also, for airports where in-line systems may not be economically justified because of high investment costs, a cost-effectiveness analysis could be used to determine the benefits of using additional stand-alone EDS machines to screen checked baggage in place of the more labor-intensive ETD machines currently being used at more than 300 airports. In addition, TSA should consider the costs and benefits of the new technologies being developed through its research and development efforts, which could provide smaller EDS machines that have the potential to reduce the costs associated with installing in-line EDS baggage screening systems or to replace ETD machines currently used as the primary method for screening.

An analysis of airport baggage screening needs would also help enable TSA to determine whether expected reduced staffing costs, higher baggage throughput, and increased security would justify the significant up-front investment required to install in-line baggage screening. TSA's retrospective analysis of the nine airports installing in-line baggage screening systems with LOI funds, while limited, demonstrated that cost savings could be achieved through reduced staffing requirements for screeners and increased baggage throughput. In fact, the analysis showed that using in-line systems instead of stand-alone systems at these nine airports would save the federal government about $1 billion over 7 years and that TSA's initial investment would be recovered in a little over 1 year. However, this analysis also showed that cost savings may not be achieved at all airports. In considering airports for in-line baggage screening systems or the continued use of stand-alone EDS and ETD machines, a systematic analysis of the costs and benefits of these systems would help TSA justify the appropriate screening solution for a particular airport, and such planning would help support funding requests by demonstrating enhanced security, improved operational efficiencies, and cost savings to both TSA and the affected airport.

To assist TSA in planning for the optimal deployment of checked baggage screening systems, we recommended in our March 2005 report that TSA systematically evaluate baggage screening needs at airports, including the costs and benefits of installing in-line baggage screening systems at airports that do not yet have in-line systems installed. DHS agreed with our recommendation, stating that TSA has initiated an analysis of deploying in-line EDS machines and is in the process of formulating criteria to identify those airports that would benefit from an in-line EDS system. DHS also stated that TSA has begun conducting an analysis of the airports that rely heavily on ETD machines as the primary checked baggage screening technology to identify those airports that would benefit from augmenting ETDs with stand-alone EDS equipment.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee have. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404. Individuals making key contributions to this testimony included Charles Bausell, Amy Bernstein, Kevin Copping, Christine Fossett, David Hooper, Noel Lance, Thomas Lombardi, and Alper Tunca.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Mandated to screen all checked baggage using explosive detection systems at airports by December 31, 2003, the Transportation Security Administration (TSA) deployed two types of screening equipment: explosives detection systems (EDS), which use computer-aided tomography X-rays to recognize the characteristics of explosives, and explosives trace detection (ETD) systems, which use chemical analysis to detect traces of explosive material vapors or residues. This testimony discusses (1) TSA’s deployment of EDS and ETD systems and the impact of initially deploying these systems, (2) TSA and airport actions to install EDS machines in-line with baggage conveyor systems, and the federal resources made available for this purpose, and (3) actions taken by TSA to optimally deploy checked baggage screening systems.

TSA has made substantial progress in installing EDS and ETD systems at the nation’s more than 400 airports to provide the capability to screen all checked baggage using explosive detection systems, as mandated by Congress. However, in initially deploying EDS and ETD equipment, TSA placed stand-alone ETD machines and minivan-sized EDS machines—mainly in airport lobbies—that were not integrated in-line with airport baggage conveyor systems. TSA officials stated that the agency’s ability to initially install in-line systems was limited because of the high costs and the time required for airport modifications. These interim lobby solutions resulted in operational inefficiencies, including requiring a greater number of screeners, as compared with using EDS machines in-line with baggage conveyor systems. TSA and airport operators are taking actions to install in-line baggage screening systems to streamline airport and TSA operations, reduce screening costs, and enhance security. Eighty-six of the 130 airports we surveyed either have, are planning to have, or are considering installing full or partial in-line systems. However, resources have not been made available to fund these capital-intensive systems on a large-scale basis. Also, the overall costs of installing in-line baggage screening systems at each airport are unknown, the availability of future federal funding is uncertain, and perspectives differ regarding the appropriate role of the federal government, airport operators, and air carriers in funding these systems. TSA has not conducted a systematic, prospective analysis to determine at which airports it could achieve long-term savings and enhanced efficiencies and security by installing in-line systems or, where in-line systems may not be economically justified, by making greater use of stand-alone EDS systems rather than relying on the labor-intensive and less efficient ETD screening process. However, at nine airports where TSA has agreed to help fund the installation of in-line baggage screening systems, TSA conducted a retrospective cost-benefit analysis which showed that these in-line systems could save the federal government about $1 billion over 7 years. TSA further estimated that it could recover its initial investment in the in-line systems at these airports in a little over 1 year.
The Federal Acquisition Regulation and Cost Accounting Standards provide overall requirements for allocating incurred costs either directly to a program or indirectly to pools of similar types of costs that are allocated proportionately among the programs. For example, to avoid double counting and ensure that DOE programs and other federal agencies pay an appropriate share of indirect costs, the standards require that a contractor use consistent methods for estimating costs for each project or activity. That is, if the laboratory charges one project $8 per square foot for maintenance, it must charge other projects in the same manner. Contractors submit for DOE approval accounting policy statements describing how they will classify costs as direct or indirect. DOE improved its ability to compare laboratories’ costs in fiscal year 1997, when it began requiring contractors to report all functional support costs, regardless of how they were classified. Functional support cost data facilitate comparisons of laboratories’ costs because they are intended to include all costs that support laboratory missions, regardless of whether a particular laboratory has classified the support costs as direct or indirect. However, functional support costs have limitations in that they cannot account for differences in the mission, size, age, or location of DOE facilities. Facility comparisons need to factor in the differences, for example, in (1) maintenance costs for a 50-year-old manufacturing facility as compared with those of a modern research facility or (2) safety and health costs at a facility that uses nuclear materials as compared with a facility with no nuclear materials. From fiscal years 2000 through 2004, indirect cost rates increased at two of the five DOE laboratories we examined and decreased at the other three laboratories. A laboratory’s indirect cost rates are generally comparable over time, but the indirect cost rates of different laboratories are not comparable because contractors often categorize costs differently. Regarding functional support costs, three of the five laboratories’ rates increased, while rates of two laboratories decreased during the same 5-year period. While functional support cost data help DOE to compare rates across laboratories, several DOE and contractor officials told us that the definitions for some categories are unclear, leading to confusion among categories. Finally, because of the differences in how DOE contractors categorize direct and indirect costs, and because other federal, non-DOE contractors do not collect and report functional support costs, the rates at DOE’s laboratories cannot be compared with those of the Jet Propulsion Laboratory or Lincoln Laboratory. From fiscal years 2000 through 2004, indirect cost rates increased at two laboratories operated by DOE contractors and decreased at three. Los Alamos had the largest increase—10.4 percent. Nearly all of this increase occurred during the 4th quarter of fiscal year 2004 and, according to Los Alamos officials, was attributable to the “stand down” of activities that the laboratory director ordered in July 2004 in response to a series of safety and security incidents. In particular, the Los Alamos officials told us that the stand down resulted in lower program costs without similarly lower indirect costs. For example, $8 million in staff costs for the stand down’s first 2 days was treated as indirect costs as laboratory managers developed plans for assessing and resolving the safety and security issues. 
Once risk assessment and mitigation activities began, stand-down costs were charged directly to benefiting programs. Los Alamos officials stated that general and administrative costs, especially costs in the “executive direction” category, were higher than expected, while direct program costs were lower. The indirect cost rate for Lawrence Livermore also increased, by 2.9 percent, from fiscal years 2000 through 2004 because of additional costs, such as those related to facilities safety, maintenance, environmental protection, and hazard control. In contrast, Idaho National Laboratory’s indirect cost rate decreased by 32.1 percent during the 5-year period, mainly because the contractor reclassified a large portion of its costs associated with environmental cleanup operations from indirect to direct in fiscal year 2004. According to contractor officials, these costs were reclassified in response to DOE’s decision to split the management and operating contract, which expired during fiscal year 2005, into two parts by awarding separate contracts for laboratory operations and the environmental restoration of the site. Laboratories can make changes in how they account for indirect costs, and, when they do, they are required to document these changes in their disclosure statement for DOE approval. The indirect cost rates for Oak Ridge and Sandia decreased by 7.3 percent and 2.2 percent, respectively, between fiscal years 2000 and 2004. According to Oak Ridge contractor officials, the decrease resulted from several factors, including management’s decision to limit the growth of indirect costs while the laboratory’s total spending grew—when total costs increase at a higher rate than indirect costs, the indirect cost rate will decrease because the rate equals the indirect costs divided by total costs.

Indirect cost rates cannot be meaningfully compared across laboratories because one contractor may track costs more closely than another, allowing it to classify a higher proportion of the cost of a support activity, such as administration, as a direct cost. For example, from fiscal years 2000 through 2003, Oak Ridge and Sandia classified administrative support costs as both indirect and direct costs, while Idaho classified all administrative support costs as indirect. Similarly, Idaho classified road and ground maintenance as direct costs, while Lawrence Livermore and Oak Ridge treated them as indirect costs. Two contractor officials provided other examples of costs that laboratories are likely to classify differently: subcontract administration, any type of fringe benefit, program management, organizational management and administration, facility management and maintenance, and information technology functions. A Lawrence Livermore official noted, for example, that one contractor may treat desktop software used by many programs as a direct cost, while another contractor may bundle these software purchases into a common site license that is paid from an indirect account. The indirect cost rate may be higher in the latter case, but the goods may be obtained at a lower cost. Thus, higher indirect costs do not necessarily equate with less efficiency. Further, the five DOE contractors use different methods to classify support activities—that is, they differ in how indirect costs are collected and distributed. For example, one contractor used 4 major indirect cost pools while another contractor used 12 indirect cost pools for distribution among programs.
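The rate formula noted above explains the Oak Ridge pattern: a rate can fall even while indirect spending grows. A minimal sketch, using hypothetical dollar figures rather than any laboratory’s actual data (the functional support cost rates discussed below are computed the same way, with functional support costs in the numerator and capital construction excluded from total costs):

```python
# Indirect cost rate = indirect costs / total costs.
# If total costs grow faster than indirect costs, the rate falls even
# though indirect spending itself has increased.
def cost_rate(numerator, total):
    return numerator / total

year_1 = cost_rate(200e6, 1.0e9)  # $200M indirect on $1.0B total
year_2 = cost_rate(210e6, 1.2e9)  # $210M indirect on $1.2B total
print(f"{year_1:.1%} -> {year_2:.1%}")  # 20.0% -> 17.5%
```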
Similarly, the number of major service centers at the five DOE laboratories ranged from 6 to 14. Service centers are accounts where costs of specific services are accumulated and charged on the basis of services rendered, either to a program or to other indirect cost pools. Common service centers are telecommunications and computing centers.

Table 1 shows that for fiscal years 2000 through 2004, three laboratories’ functional support cost rates increased and two laboratories’ decreased. These rates are the functional support costs as a percentage of total costs without capital construction. Again, Los Alamos had the largest increase, 8.1 percent, and Idaho had the largest decrease, 11.8 percent. Functional support costs were primarily developed to facilitate the analysis of each facility’s costs. While not intended for comparison purposes, they provide more comparability across laboratories than indirect costs because they are developed on the basis of standard, defined cost categories. However, detailed analysis is required to determine whether rate differences are the result of inefficiencies or other factors, such as differences in each facility’s mission, activities, location, or size. For example, costs for safety and health, maintenance, and utilities at Los Alamos are higher than costs at other sites because, according to DOE officials, the laboratory has 2,224 facilities on 27,800 acres of mesas and canyons. Also, Los Alamos uses plutonium and other hazardous materials, which require added safety procedures, and accelerator facilities, which consume large amounts of electricity.

While functional support cost rates facilitate improved analysis of laboratory costs, several DOE and contractor officials told us that the definitions for some categories are unclear, leading to confusion in how to categorize certain costs. Notably, the “facilities management” and “maintenance” categories are ambiguous and, hence, are not fully comparable across laboratories. Peer reviewers checking the accuracy of cost classifications have found that several laboratories have misclassified costs between the two categories. In July 2003, peer reviewers found that Sandia had classified $1.1 million of facilities management costs as maintenance and $8.8 million in maintenance costs as facilities management. In June 2004, peer reviewers found that Los Alamos had classified $550,000 in maintenance costs as facilities management. In addition, in fiscal year 2003, a Lawrence Livermore internal review found that the laboratory had categorized plant facility engineering costs as maintenance, while other laboratories had categorized these costs as facilities management. In 2004, after discussing the categories with DOE and contractor officials involved in reviewing functional support cost data, Lawrence Livermore moved $15 million from the fiscal years 2002 and 2003 maintenance category to the facilities management category. According to our analysis, most of the peer reviews for the DOE laboratories and other facilities found difficulties in determining which cost elements should be placed in or omitted from the “facilities management” and “maintenance” categories. In addition, peer reviews at Los Alamos and Oak Ridge found over $2 million of legal or information services costs that were misclassified in the “executive direction” category, another example of a category whose definition may be unclear.
Idaho officials reported that discrepancies in executive direction cost data between Idaho and other sites resulted from uncertainty about how many levels of management or what type of site development and strategic planning costs are to be included in the executive direction category. Differences in interpretation result from insufficiently detailed guidance for developing functional support costs. The guidance consists primarily of 10 pages of definitions for the 22 support categories. DOE’s Web site does not have more detailed instructions that contractors can turn to when they are uncertain whether a cost should be classified under one category or another. Contractor officials developing cost data often turn for help to different DOE or contractor officials with responsibility for these data, increasing the likelihood of receiving inconsistent advice, even though consistency is key to data quality. Several DOE and contractor officials with responsibility for these data agreed that more specific guidance would cost little to develop and would increase consistency in reporting. For example, some officials said they could post a list of common laboratory errors on the Financial Management Systems Improvement Council’s Web site, based on peer review findings of the past few years.

Because comparisons of indirect cost rates are not very meaningful and non-DOE contractors do not report functional support costs, the cost rates for DOE’s laboratories cannot be compared with those of the Jet Propulsion Laboratory or Lincoln Laboratory. The Jet Propulsion Laboratory, operated by the California Institute of Technology, is the lead center for robotic exploration of space. Virtually all of the work it performs is under a single National Aeronautics and Space Administration agreement, which states that all of the laboratory’s costs are direct, according to laboratory officials. Lincoln Laboratory, located on Hanscom Air Force Base, conducts applied research to develop advanced technology in remote sensing, space surveillance, missile defense, battlefield surveillance and identification, communications, air traffic control, and biological and chemical defense for the Department of Defense and other federal agencies. The laboratory’s indirect cost rate cannot be compared with those of DOE laboratories without a detailed understanding of differences in (1) contract provisions and other requirements; (2) how contractors classify costs as direct or indirect; and (3) research missions and activities, such as the added costs at the DOE laboratories associated with safety requirements for handling radioactive and other hazardous materials. Lincoln Laboratory’s indirect cost rate increased by 13.9 percent between fiscal years 2000 and 2004 because of a new enterprisewide accounting and management reporting system and infrastructure improvements for laboratory test facilities, according to laboratory officials.

DOE and its contractors have numerous efforts under way to reduce indirect and other support costs; however, we identified several efforts that could be strengthened to further reduce costs. First, DOE is including incentives in its contracts to encourage indirect cost reductions. DOE officials stated that one of these incentives, a pilot to award additional contract years for performance, had produced cost savings. DOE is expanding this incentive to additional laboratories, although it has not evaluated its effectiveness.
Second, DOE generally requires contractors to offer employee benefits that are similar in value to those of comparable organizations, but the department has done little to enforce this requirement. Third, DOE has begun requiring contractors to address a backlog of maintenance projects while they also manage current maintenance needs. Although this effort will involve costs in the near term, it could reduce support costs in the long term. However, only Lawrence Livermore and Sandia have programs that have been shown to be sustainable over several years and appear to be promising models. Finally, while DOE and some contractors have reduced costs through process improvement programs, consolidated procurement actions, and audits by DOE’s Inspector General and DOE contractor audit groups, opportunities exist for further cost savings through these activities.

Recently, NNSA has taken several actions to improve business operations and achieve support cost savings through contractual incentives. For example, Sandia’s management and operating contract that NNSA extended to the Sandia Corporation, a subsidiary of Lockheed Martin Corporation, in October 2003 gives higher priority to improved performance and greater efficiency in business operations. Specifically, 40 percent of the contract’s annual award fee in fiscal years 2004 and 2005 is based on Sandia’s performance in areas such as information technology, procurement, human resources, and maintenance. Similarly, NNSA’s management and operating contracts for Lawrence Livermore and Los Alamos have given greater emphasis to improved performance and greater efficiency in business operations. For example, the Los Alamos contract’s performance measures that focused on business operations increased from about 28 percent in fiscal year 2002 to 40 percent in fiscal year 2005. NNSA also has added provisions to some of its contracts to allow the laboratories to reinvest cost savings in other activities considered to be indirect costs. In addition, the Sandia contract initiated a pilot award-term program under which an additional year may be awarded to the life of the contract for each year the contractor achieves an overall outstanding performance rating. A key performance target is finding sufficient cost savings to be applied to unfunded projects. In fiscal year 2004, the first year of the pilot program, Lockheed Martin earned a 1-year extension on its contract and documented $38 million in cost savings, which it spent on agreed-upon projects, such as the following:

$14 million for reprogramming security and safeguards to meet the new
$9.8 million for investing in computer clusters for defense projects,
$3 million for purchasing equipment to refurbish the pulsed power
$2 million for enhancing the classified network,
$2 million for cleaning up beryllium contamination, and
$0.3 million for negotiating an agreement with Russia on polymer research.

Although Sandia and NNSA officials stated that they believe the award-term program emphasizes improved performance and cost savings better than provisions in prior contracts, NNSA has not evaluated the nearly 2-year-old pilot. Such an evaluation could compare the benefits of redirecting funds for better mission uses with the costs of forgoing recompetition of the contract and examine whether the cost savings resulted in any negative effects on reduced work quality.
The evaluation also could determine whether award-term incentives need to be revised in other contracts to improve their effectiveness and sustainability, particularly since the mission and level of performance among contractors vary. By expanding the incentive without evaluating it, DOE does not know if it is receiving benefits commensurate with awarding extra years to the contract term. Despite the lack of evaluation, DOE’s Office of Science extended the award-term incentive to Lawrence Berkeley National Laboratory, and NNSA plans to extend it to Los Alamos and the Nevada Test Site later this year when it awards new contracts. As a result, Lawrence Berkeley’s contractor, the University of California, can potentially earn up to 15 additional years on its recently awarded 5-year contract; the request for proposals for the Nevada Test Site states that the contractor can potentially earn up to 5 additional years on its 5-year contract; and the request for proposals for Los Alamos states that the contractor can potentially earn up to 13 additional years. (See app. I for more information on this topic.)

To ensure that the value of each contractor’s employee benefits is comparable with that of its competitors and that costs are reasonable, DOE Order 350.1 requires its management and operating contractors to periodically benchmark the value of their employee benefit packages—including retirement pensions, health care, death, and disability—with those of organizations with whom the contractors compete in hiring employees. The DOE order requires that if the value of a contractor’s benefits exceeds the average benchmarked value by more than 5 percent, the contractor will provide DOE with a plan to adjust the benefits so that they fall within 5 percent of the benchmarked value. DOE must ensure that the contractor’s proposed adjustments are acceptable and reasonable. More specifically, the DOE order requires that the contractors use a professionally recognized measure to compare the value of their benefits with those of other organizations. The contractors can use a nationally recognized consulting firm with expertise in benefit value studies to perform such a study every 3 years or perform an annual employee benefit comparison survey through the U.S. Chamber of Commerce. A benefit value study determines the average value of each benefit for 15 organizations with similar workforces. The average value of the benefits becomes the benchmark against which the contractor’s benefits are assessed.

The benefit value studies conducted for Lawrence Livermore, Los Alamos, and Sandia in 2004 show that the value of employee benefits for these laboratories exceeded the benchmark by more than 5 percent in several of the four primary categories of benefits. More important, the studies showed that the overall benefits for those three laboratories exceeded the allowable 5 percent variance (see table 2). Lawrence Livermore and Los Alamos both had benefit values that far exceeded the benchmark and, in many categories, both laboratories exceeded all comparators. For example, pension benefits for both laboratories exceeded those of all 15 comparators and were nearly twice the benchmarked value. Lawrence Livermore, Los Alamos, and Sandia were highest or second highest in most benefit categories. Sandia’s defined benefit pension was second highest of all 15 comparators and exceeded the benchmarked value by 68 percent.
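The order’s 5 percent test reduces to a simple ratio check. A minimal sketch, using hypothetical benefit values rather than figures from the 2004 studies:

```python
# DOE Order 350.1 test: a corrective plan is required when a contractor's
# benefit value exceeds the benchmark (the 15-organization average) by
# more than 5 percent. The values below are hypothetical index numbers.
ALLOWED_VARIANCE = 0.05

def requires_adjustment_plan(contractor_value, benchmark_value):
    excess = (contractor_value - benchmark_value) / benchmark_value
    return excess > ALLOWED_VARIANCE

print(requires_adjustment_plan(118.0, 100.0))  # True: 18% above benchmark
print(requires_adjustment_plan(104.0, 100.0))  # False: within the 5% band
```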
In contrast, the value of benefits for Idaho and Oak Ridge did not exceed the 5 percent allowable range. DOE did not require prompt action to adjust benefits at the three laboratories that exceeded the benchmarked value. Initially, DOE and, later, NNSA exempted Lawrence Livermore and Los Alamos from the DOE benchmark because the contract with the University of California, which manages the laboratories, allowed the university to extend its own benefits package to laboratory employees. When the DOE order was issued in 1996, the existing contract with the university took precedence, according to a university document. DOE did not request a benefit value study until 2004, after it was determined that the Los Alamos and Lawrence Livermore contracts were scheduled to be competed and DOE became concerned about long-term liabilities for employees’ postretirement costs. DOE has no short-term liability for the pension benefits because the pension plan is fully funded and is projected to remain fully funded for at least the near term, according to university officials. However, DOE’s liability for long-term pension benefits for these laboratories remains undetermined. In an effort to reduce this liability, NNSA is requiring a stand-alone pension plan for the winning bidder of the Los Alamos contract, which DOE plans to award at the end of this year.

Similarly, after a benefit value study in 2001 showed that the value of Sandia’s benefits exceeded the benchmarked value, NNSA did not require Sandia to adjust its benefits. In this case, an actuarial study showed that NNSA had minimal risk that it would have to contribute to Sandia’s pension plan for at least 5 years. However, the amount of long-term liability is again undetermined because the study could not reliably determine the risk of NNSA having to contribute beyond 5 years. When a new benefit value study was completed 3 years later, in May 2004, NNSA required Sandia to submit a corrective action plan to adjust the benefits. NNSA received Sandia’s plan for making adjustments in June 2005, but officials are requiring modifications before approving the plan.

Finally, while benchmarking the value of benefits is a step in the right direction, DOE does not require contractors to benchmark the costs of their benefits. NNSA officials stated that cost studies are needed because the value of benefits may not be directly proportional to their costs. For example, while value and cost are generally highly correlated, it is possible that a contractor may negotiate high-value benefits that have a low cost, or low-value benefits that have a high cost. DOE has not yet finalized the revisions to DOE Order 350.1, which includes a draft provision to require benchmarking of costs, in addition to benefits. In commenting on this report, DOE officials stated that, since late 2004, DOE solicitations and awards for management and operating contracts have required contractors to conduct benefit value and cost studies. In addition, our April 2004 report (1) noted that DOE should review postretirement costs because they have a continuous and compounding effect as they are paid out for each year of retirement and (2) recommended that DOE strengthen its oversight of postretirement benefits by focusing more attention on long-term costs.

For more than 2 decades, DOE orders and policies have required that contractors’ maintenance programs ensure the safe, reliable, and efficient operation of buildings and equipment.
They have also required that these programs be adequately funded to ensure that the design requirements of the buildings and equipment are met or exceeded for their operating lives, and that industry standards for maintenance are applied. Industry standards, for example, require that maintenance budgets be developed using historical and other information on the resources required to maintain the structure or equipment in good repair. Industry standards and the National Academy of Sciences have recommended that day-to-day maintenance requires continuous annual funding of about 2 percent to 4 percent of the replacement plant value.

Despite these requirements, for many decades, DOE and its contractors have neglected the routine maintenance of buildings and equipment—including inspection of fire alarms, upgrades to electrical systems, and testing of equipment critical to the nuclear weapons program. DOE and contractor officials stated that the mission always took priority for resource allocations. This practice has resulted in a maintenance backlog that will cost an estimated $1.9 billion for the five laboratories, excluding deferred maintenance for unused buildings. Maintenance that continues to be deferred, particularly on older structures and equipment, will contribute to continued growth in deferred maintenance costs, including replacement costs for certain equipment parts. For example, Lawrence Livermore reports that its deferred maintenance costs continue to escalate each year because of the higher probability of failure in the operability of aging structures and equipment. In fiscal year 2004, Los Alamos reported that it had the oldest structures of the three weapons laboratories, with an average structure age of 33 years. Los Alamos also reported that it had the highest level of deferred maintenance of the five laboratories, accounting for about one-third, or $547 million, of the backlog for the five laboratories. The backlog could also jeopardize the safe, reliable, and efficient operation of the buildings and equipment. At Los Alamos, for example, the backlog put at risk the safety of some workers. Officials inspecting the fire protection system in early fiscal year 2004 reported that the fire sprinklers were not properly winterized to protect them from freezing, as required by fire codes and standards. The inspectors reported that at least three fire sprinkler pipes had frozen since November 2003, creating a safety concern in the event of a fire. At Idaho, the backlog has placed reliable operation of the Advanced Test Reactor at risk. Specifically, the deferral of maintenance and recapitalization for several key systems—including the waste control system and the digital monitoring system—has resulted in the reliance on an outdated computer system for which technical support is no longer available and replacement parts can only be found in used parts markets. Loss of any of these key systems could require Idaho to temporarily shut down the Advanced Test Reactor, hindering test plans and DOE’s nuclear energy mission. Idaho plans to continue replacing failing parts in the computer system until it can fund a full system replacement.

DOE has begun to focus on deferred maintenance in an effort to reduce the backlog and long-term costs. DOE’s Defense Programs, the predecessor to NNSA, first began to address the maintenance backlog in fiscal year 2000.
Defense Programs required the nuclear weapons facilities to develop 10-year plans to evaluate short- and long-term maintenance needs and develop long-term efforts to better manage the maintenance backlog. At about the same time, the Congress began providing funding for certain maintenance programs, such as NNSA’s Readiness in Technical Base and Facilities Program. Some DOE laboratories also are using different types of indirect costs, such as space charges, to help address maintenance. Additionally, in fiscal year 2002, the Congress began funding the Facilities and Infrastructure Recapitalization Program to reduce long-term deferred maintenance for NNSA facilities. The program can also be used to demolish certain structures. Demolishing or consolidating structures can help reduce maintenance and other operating costs. Between fiscal years 2002 and 2005, the Congress provided about $1 billion for this program and, in fiscal year 2005, NNSA requested an additional $1.7 billion for fiscal years 2006 through 2009. When established, the program had two key goals. First, contractors are to stabilize deferred maintenance by the end of fiscal year 2005 so that there is no further growth in backlog. NNSA informed the Congress that contractors have met this goal of stabilizing deferred maintenance. The second goal is for contractors to reduce the maintenance backlog so that the backlog for mission-critical and nonmission-critical structures is less than 5 percent and less than 10 percent, respectively, of replacement plant value by fiscal year 2009. However, an NNSA official stated that most of NNSA’s contractors will not be able to meet this goal because the $1.7 billion budget request for fiscal years 2006 through 2009 was reduced by $574 million.

Furthermore, in fiscal year 2003, DOE followed NNSA’s lead by addressing maintenance in 10-year plans to better manage maintenance overall, including deferred maintenance. As with NNSA’s long-term planning requirements, DOE Order 430.1B requires that all DOE facilities develop 10-year plans to assess, among other things, their short-term and long-term maintenance needs. Although the order does not have specific time lines associated with reducing deferred maintenance, DOE’s headquarters program offices are responsible for approving the 10-year plans and for tracking the performance of the facilities against the plans. DOE’s Office of Engineering and Construction Management has general oversight over DOE Order 430.1B, including the review of the 10-year plans and the tracking of program office performance. Despite all of these efforts, DOE contractors report that deferred maintenance will continue to be a significant problem in the short term and long term.

Of the five laboratories, Lawrence Livermore has demonstrated the most sophisticated and sustainable approach that fully funds maintenance needs and reduces deferred maintenance over the long term. Lawrence Livermore’s plan, first implemented in 1998, relies on a combination of funding, including two directly funded NNSA programs—the Readiness in Technical Base and Facilities Program and the Facilities and Infrastructure Recapitalization Program—and an indirect cost pool established for its Maintenance Management Program. The Maintenance Management Program collects funds ($8 per square foot) from all users of its buildings—including NNSA and other DOE and non-DOE programs. About 20 percent of the collected funds are spent on deferred maintenance at the laboratory, in accordance with the program.
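Both the industry funding standard cited earlier and NNSA’s backlog goals are expressed as fractions of replacement plant value (RPV), so a site’s status can be checked directly. A minimal sketch, with a hypothetical RPV and backlog rather than any laboratory’s actual figures:

```python
# Industry standard: annual maintenance funding of about 2 to 4 percent
# of replacement plant value (RPV). NNSA fiscal year 2009 goal: deferred
# maintenance below 5 percent of RPV for mission-critical structures
# (10 percent for nonmission-critical). Figures below are hypothetical.
rpv = 4.0e9                      # replacement plant value, in dollars
low, high = 0.02 * rpv, 0.04 * rpv
print(f"Annual maintenance funding target: ${low/1e6:.0f}M to ${high/1e6:.0f}M")

backlog = 0.3e9                  # deferred maintenance, mission-critical
ratio = backlog / rpv
print(f"Backlog is {ratio:.1%} of RPV; meets the 5% goal: {ratio < 0.05}")
```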
The program identifies the most critical needs for maintenance on the basis of a matrix balancing “mission-essential” and “probability of failure” criteria. In addition, the charge itself encourages more efficient use of space and the return of unneeded space for other uses or for demolition. Also, other Lawrence Livermore cost pools, such as the Institutional General Plant Project, help fund capital improvements that can help address maintenance by upgrading structures and equipment. By managing the various elements of its maintenance effort, Lawrence Livermore stopped the growth of its maintenance backlog by fiscal year 2002. Despite the anticipated reductions in the Facilities and Infrastructure Recapitalization Program, Lawrence Livermore projects that it can reduce its deferred maintenance to within NNSA’s industry standards by 2011.

Sandia has an approach similar to Lawrence Livermore’s, relying on a combination of funds for addressing its maintenance and deferred maintenance costs. Using an established methodology, a committee sets maintenance priorities and determines how to spend funds collected from an internal cost recovery pool. Sandia charges building users $12 per square foot and expects to increase space charges over the next 4 years to compensate for expected cuts in maintenance funding. As a result, despite the anticipated reductions in the Facilities and Infrastructure Recapitalization Program, Sandia projected in March 2005 that it would reduce its deferred maintenance to within NNSA’s industry standards by 2011.

Los Alamos officials acknowledge that they cannot achieve NNSA’s goals for reductions in deferred maintenance given the expected cuts in the Facilities and Infrastructure Recapitalization Program in the foreseeable future. Contractor and NNSA officials acknowledge that Los Alamos’ maintenance program is not sustainable, particularly given its current level of funding for maintenance. Any reductions in backlog as a result of the Facilities and Infrastructure Recapitalization Program would reaccumulate when that funding ended. Similarly, even though Oak Ridge provides additional support for maintenance by charging for space used, officials report that without additional infrastructure renewal or recapitalization funds to address aging facilities, the charge is insufficient to fully fund its maintenance needs. Without this additional funding, Oak Ridge reports that its maintenance backlog will continue to grow. Finally, Idaho has passed many of its deferred maintenance liabilities on to another contractor responsible for cleaning up part of the laboratory site; nevertheless, Idaho reports that without additional funding, its maintenance backlog will continue to grow.

Idaho, Lawrence Livermore, and Sandia have reduced support costs through programs to improve business and other processes, while Los Alamos and Oak Ridge do not have such programs. In addition, DOE and some of the laboratories we reviewed have some efforts under way to consolidate procurement actions. Finally, DOE’s Inspector General and contractors’ internal audit groups have contributed to some cost savings by auditing support functions. Process improvement is the practice of taking an analytical look at the different steps that go into developing a product or delivering a service and assessing how those processes could be conducted better.
In a process improvement initiative, participants may define the process and metrics, measure performance, analyze root causes of problems, improve areas of low performance, and develop controls for the process. Typically, the process improvement initiative involves iterative cycles of identification and improvement, fueled by employee participation in identifying problem areas and recommending steps to improve them. It is generally recognized that process improvement is a good business practice. Process improvement programs can increase product or service quality while decreasing costs. Public and private organizations have reported significant returns on investment through process improvement programs.

Several DOE laboratories and production facilities, including Idaho, Sandia, and Lawrence Livermore, have used process improvement methods, such as Six Sigma and Lean Six Sigma, to reduce costs. Six Sigma and Lean Six Sigma are rigorous and disciplined methodologies that use data and statistical analyses to measure and improve the performance of a company’s operations by identifying and eliminating defects in manufacturing and service-related processes. Idaho used Six Sigma to reduce the average cost of 16 Safety Assessment Reports for a total savings of $907,000 from fiscal years 2002 through 2003. Sandia used Lean Six Sigma to streamline its accounts payable purchase order process, potentially leading to savings of more than $1 million over a 5-year period. Finally, Lawrence Livermore used Six Sigma and other methods to improve processes in the areas of safety, security, resource management, and property management. For example, the laboratory reduced security staff overtime by increasing the workday to 12 hours and reported a $2.3 million annual savings. The laboratory also standardized the hours of the perimeter security gates and closed redundant gates, in an effort to save $300,000 annually. Lawrence Livermore plans to train 400 managers by the fall of 2006 in a new process improvement method based on a combination of Lean Six Sigma and a method used at the United Kingdom’s Atomic Weapons Establishment, which provides warheads for nuclear deterrence.

Neither Oak Ridge nor Los Alamos has a process improvement program, although some DOE and laboratory officials agreed that such programs can help reduce costs. When laboratories do not have some type of process improvement program, DOE has little assurance that managers are identifying and addressing inefficient or ineffective processes.

Many of the DOE Inspector General’s reports have examined support activities at DOE laboratories and other facilities, including security, procurement, property management, information resources, financial management, and financial controls. The Inspector General has found opportunities for reducing indirect and other support costs. For example, in 2003, the Inspector General’s audit of central office expenses for the Thomas Jefferson National Accelerator Facility questioned $4.6 million in central office expenses claimed by and paid to the contractor from November 1999 to September 2002. These questioned expenses included costs that were not allowable, such as alcoholic beverages, and costs that were not adequately supported or documented. The audit resulted in $3.5 million in savings to DOE.
In 2003, an audit of Los Alamos reported support and other costs of $14.6 million that were potentially unallowable, including meals, excessive travel costs, and an internal audit function that did not meet DOE requirements. DOE officials are in the process of determining the allowability of the questioned amount. In addition to the Inspector General audits, contractors’ internal auditors annually perform an audit of the allowability of costs as claimed by contractors. This audit addresses both indirect and direct costs, identifying whether costs are allowable under the terms of the contract. For example, one such audit, conducted at Sandia in fiscal year 2003, projected $112,000 in unallowable costs—DOE is in the process of determining the precise amount.

DOE and its management and operating contractors have reduced support and other costs by consolidating their procurements of computer equipment, office supplies, and other bulk items into single contracts that result in volume discounts and the need for fewer purchasing resources. In particular, DOE established an Integrated Contractor Procurement Team that pursues buying opportunities and has negotiated over 40 purchasing agreements with vendors to reduce procurement staff time and obtain favorable prices, according to a DOE official. The team, composed of contractor purchasing agents, surveys sites to determine what contractors are paying for a product and whether they can negotiate a better price. A contractor that wants to purchase goods (e.g., paper) can review a list of agreements and supplier contact information on the Integrated Contractor Procurement Team’s Web site and use its Basic Order Agreement. This saves contractors and suppliers time because they do not need to renegotiate all the terms of the contract. Contractors do not track savings resulting from the use of the Integrated Contractor Procurement Team’s agreements, but they have collected some examples of savings that total about $12 million to $15 million annually. Other efforts to save time and money through consolidation of procurement actions include the following:

NNSA has required Lawrence Livermore, Los Alamos, and Sandia to begin analyzing their spending using similar software so that opportunities for consolidated purchases among the three laboratories can be identified.

Lawrence Livermore has hired a specialist who is focused full-time on analyzing the laboratory’s spending patterns and identifying opportunities for consolidation. As a result of the opportunities identified, the procurement office expects to reduce the number of subcontracts by 10 percent, saving procurement staff time. The laboratory also has developed 13 Electronic Ordering System agreements through which suppliers provide electronic catalogs with over 2 million commercial items (e.g., computers, electrical and plumbing supplies, and chemicals) that laboratory employees can purchase online. The purchases are automatically routed through an approval system.

Oak Ridge consolidated its new radio system, which is used by security and maintenance staff, with other DOE facilities in the area, reportedly saving $900,000 on purchase prices plus $475,000 in annual costs.

Laboratory officials told us that opportunities exist for further cost savings through reduced staff time and better prices by consolidating procurements with each contractor’s parent organization, other DOE facilities, or other federal agencies’ facilities in the same region.
Idaho officials cited the potential for reducing costs by consolidating procurements with nearby military bases and other facilities. An NNSA procurement official noted that laboratories could save through increased use of Integrated Contractor Procurement Team agreements but observed that some procurement officials may not be aware of available opportunities.

In an era of federal budget constraints, it is crucial to efficiently manage support costs at DOE laboratories, thereby maximizing funds available for laboratory missions. DOE and its contractors have taken steps to reduce support costs, but additional opportunities exist. To help decision makers analyze support costs across the laboratories, several years ago DOE began to require laboratories to report functional support costs. However, peer reviews have revealed some problems with these data, and challenges remain in comparing costs, in part because of ambiguity in the definitions for some categories of support costs. To encourage contractors to reduce support costs, DOE also piloted a program to award contract years for performance improvement and cost savings. However, by expanding the pilot incentive program without evaluating it, DOE does not know if it is receiving benefits commensurate with awarding extra years to the contract term. In another effort to save money, DOE developed requirements to ensure that contractors’ employee benefits are comparable with those of similar organizations with whom they compete for critically skilled staff. However, DOE has not always required its contractors to reduce employee benefits that substantially exceed the value of their competitors’ benefits and has not required contractors to benchmark the costs of their benefits, potentially adding billions of dollars in long-term costs. Furthermore, while DOE has begun to address the $1.9 billion backlog of deferred maintenance to reduce long-term costs and improve the safe, efficient, and reliable operation of equipment and buildings, only Lawrence Livermore and Sandia have demonstrated sustainable approaches that successfully reduced their backlogs. Lastly, process improvement programs are generally considered a good business practice, and three laboratories have reduced costs through their own programs. DOE, however, does not require laboratories to have such programs, and some laboratories have not instituted one. Overall, while DOE has made progress, without additional attention to these initiatives, the department may miss the opportunity to produce significant savings.

To improve the quality and comparability of DOE facilities’ support cost data, we recommend that the Secretary of Energy direct the chief financial officer (CFO) to work with the Financial Management Systems Improvement Council to clarify definitions of functional support cost categories. To determine whether the department receives benefits commensurate with awarding 1 or more extra years to the contract term, we recommend that the Secretary of Energy direct NNSA to evaluate the effectiveness of its pilot award-term program at Sandia National Laboratories, particularly the nature and extent of work quality improvements, prior to extending the program to other laboratories.
To provide competitive but economical employee benefits, we recommend that the Secretary of Energy complete the revision of DOE Order 350.1 and ensure that the order (1) extends the requirement to benchmark the value of employee benefits to all contractors; (2) requires prompt corrective action if the value of benefits exceeds the allowable range; and (3) extends the benchmarking requirements to include the costs, as well as the values, of the benefits. To reduce long-term maintenance costs at contractor-operated facilities, we recommend that the Secretary of Energy develop a long-term sustainable approach that meets day-to-day maintenance requirements, reduces the maintenance backlog, and minimizes its reaccumulation. To facilitate the further reduction of support costs, we recommend that the Secretary of Energy require that each DOE management and operating contractor implement a process improvement program that routinely assesses the efficiency and effectiveness of business practices and other operations.

We provided DOE with a draft of this report for its review and comment. In written comments, DOE generally concurred with our recommendations. (See app. II.) DOE also provided a number of technical comments, which we incorporated in this report as appropriate.

To examine indirect cost-rate trends at each of the five largest DOE laboratories, we analyzed each laboratory’s indirect cost-rate data for fiscal years 2000 through 2004. Specifically, we interviewed laboratory financial officials and analyzed documents to determine how each laboratory calculated its overall indirect cost rate by (1) classifying costs as direct or indirect and (2) collecting indirect costs into pools and distributing them among other cost pools or directly to program sponsors. Furthermore, because DOE and its management and operating contractors use functional support cost data to help assess certain activities, we analyzed these rates for the five laboratories for fiscal years 2000 through 2004, along with the results of peer reviews performed at each laboratory. We surveyed laboratory financial officials on the reliability of the indirect and functional support cost data, covering issues such as data entry access, quality control procedures, and the accuracy and completeness of these data. Follow-up questions were asked whenever necessary. In addition, we reviewed all data provided by the laboratories, investigated instances where we had questions regarding issues such as categories or amounts, and made corrections as needed. On the basis of this work, we determined that the financial data provided were sufficiently reliable for the purposes of our report. We presented percentage changes in the overall indirect cost rates for fiscal years 2000 through 2004 that the five laboratories reported. However, because of limitations discussed in this report, analyses of increases or decreases in these rates mean little without a careful analysis of how a contractor classifies costs, and one contractor’s rates cannot be compared with those of others. Moreover, we analyzed changes in classification of costs or changes in mission over time to determine if these data were comparable over several years at a single laboratory. We noted in our report when we found changes in classification that affected the comparability of data at a single location over time.
To assess the efforts of DOE and its laboratory contractors to reduce support costs and identify additional opportunities for savings, we visited each of the five laboratories to interview senior managers and obtain supporting documentation, interviewed DOE officials, and examined prior GAO and DOE Inspector General reports. In the course of this work, we identified opportunities for further potential cost savings. We then interviewed cognizant laboratory and DOE officials about actions taken to address each opportunity and reviewed supporting documentation. Specifically, to review DOE’s contract provisions, we interviewed DOE contracting officers and obtained information about recent contractual provisions that emphasized potential cost savings or improved efficiency. To address DOE’s liabilities related to employee benefits, we analyzed benefit value studies for each of the laboratories and compared the results of our analysis with the department’s requirements. We also interviewed DOE, NNSA, and contractor officials to clarify the results of the studies and to provide us with documentation on proposed resolutions, as appropriate. It was not our intent to verify, nor would we have been able to independently verify, the accuracy of actuarial calculations, assumptions, or data used in the comparison studies because of the proprietary nature of the consulting firms’ databases used to conduct the studies. To address the maintenance backlog, we analyzed data from DOE’s Office of Engineering and Construction Management, which collects information on deferred maintenance. We also analyzed each laboratory’s 10-year plan and related documents and compared the results of our analysis with NNSA’s and the department’s requirements. We spoke with cognizant officials at DOE headquarters and its site offices and with laboratory managers to verify the results of our analysis and to determine the actions being taken to address the backlog. To examine the laboratories’ use of process improvement programs, we reviewed examples that the three laboratories provided of improved effectiveness and of reduced costs for their business operations and interviewed senior managers at the other two laboratories regarding what they had done to improve business operations.

As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about the report, please contact me at (202) 512-3841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The Department of Energy (DOE) introduced award-term incentives as a pilot program at Sandia in fiscal year 2004, and then expanded the use of the incentives to the Lawrence Berkeley contract and to the final request for proposals for Los Alamos and the Nevada Test Site, as shown in table 3.
The language in the final request for proposals for Los Alamos, if placed in the contract, would award additional years to the contract term if the contractor (1) achieves a certain level of performance to be determined by the DOE contracting officer and (2) meets other conditions, such as finding cost savings and using these savings to adequately perform approved work. As shown in table 3, Los Alamos can earn 13 additional years on its 7-year contract. Key differences between the Los Alamos final request for proposals and the Sandia contract are as follows:

The level of performance required to receive additional contract years can be determined by the contracting officer for Los Alamos, but Sandia’s contract requires outstanding performance. NNSA officials stated that the recent history of performance at Los Alamos may take several years to reverse, and the contractor may not be able to achieve overall “outstanding” ratings in the initial years of the contract. The officials also noted that the flexibility may allow the contracting officer to focus on certain areas for improvement, such as business operations.

The contract term for Los Alamos could be 20 years, rather than the 10 years allotted to Sandia. The 20-year term for Los Alamos requires a deviation from DOE Acquisition Regulation 970.1706-1, which limits DOE contracts to 10 years.

In addition to the contact named above, Richard Cheston, Beverly Peterson, Robert Sanchez, James Espinoza, and Cynthia Norris made key contributions to this report. Also contributing to this report were Chuck Bausell, Virginia Chanley, Nancy Crothers, Alison O’Neill, and Omari Norman.
In fiscal year 2004, about two-thirds of the Department of Energy’s (DOE) $26.9 billion in spending went to 28 major facilities—laboratories, production and test facilities, and nuclear waste cleanup and storage facilities. DOE spent about $2.9 billion in fiscal year 2004 to support the mission of its five largest laboratories. GAO was asked to examine (1) recent trends in indirect and functional support cost rates for these five laboratories, noting key differences in how contractors classify costs, and (2) the efforts of DOE and its contractors to reduce indirect and other support costs and identify additional opportunities for savings.

For fiscal years 2000 through 2004, laboratory-reported rates for indirect costs—those not charged directly to a specific program—increased at two laboratories and decreased at three. However, indirect cost rates cannot be compared across laboratories because contractors classify different portions of support costs as indirect. To facilitate analysis, DOE requires the laboratories to report what it calls “functional support costs,” or costs that support missions, regardless of whether they are classified as direct or indirect costs. Using this measure, three laboratories’ rates—that is, functional support costs divided by total costs—increased and two laboratories’ rates decreased over the 5-year period. While functional support cost rates improved comparability, several DOE and contractor officials said that the definitions for some categories of support costs, such as “facilities management,” are unclear, leading to confusion and inconsistent reporting.

DOE and its contractors have initiated several steps to reduce indirect and other support costs but can take additional actions to improve their implementation. First, DOE’s laboratory contracts have increasingly included incentives to encourage cost reductions. In fiscal year 2004, for example, the National Nuclear Security Administration began an “award-term” pilot program that allows a contractor to earn extra contract years based on performance and cost-saving achievements. However, DOE is expanding use of this incentive without evaluating it. Second, DOE requires its contractors to benchmark employee benefits and to reduce benefits if they exceed the benchmark, but DOE did not promptly enforce these requirements at one laboratory and exempted two others. Third, DOE has begun to address a $1.9 billion backlog of deferred maintenance to reduce long-term costs. However, without a more rigorous approach, the backlog will persist well into future decades. Lastly, while some laboratories have used process improvement programs to streamline business processes and reduce costs, others do not have such programs, nor are they required to have them.
No single hotline or database captures the universe of identity theft victims. Some individuals do not even know that they have been victimized until months after the fact, and some known victims may not know to report or may choose not to report to the police, credit bureaus, or established hotlines. Thus, it is difficult to fully or accurately measure the prevalence of identity theft. Some of the often-quoted estimates of prevalence range from one-quarter to three-quarters of a million victims annually. Generally speaking, the higher the estimate of identity theft prevalence, the greater the (1) number of victims who are assumed not to report the crime and (2) number of hotline callers who are assumed to be victims rather than “preventative” callers. However, we found no information to confirm the extent to which these assumptions are valid. Nevertheless, although it is difficult to specifically or comprehensively quantify identity theft, a number of data sources can be used as proxies or indicators for gauging the prevalence of such crime. These sources include the three national consumer reporting agencies that have call-in centers for reporting identity fraud or theft; the Federal Trade Commission (FTC), which maintains a database of complaints concerning identity theft; the SSA/OIG, which operates a hotline to receive allegations of SSN misuse and program fraud; and federal law enforcement agencies—Department of Justice components, Department of the Treasury components, and the Postal Inspection Service—responsible for investigating and prosecuting identity theft-related cases. Each of these various sources or measures seems to indicate that the prevalence of identity theft is growing. According to the three national consumer reporting agencies, the most reliable indicator of the incidence of identity theft is the number of long-term (generally 7 years) fraud alerts placed on consumer credit files. Fraud alerts constitute a warning that someone may be using the consumer’s personal information to fraudulently obtain credit. Thus, a purpose of the alert is to advise credit grantors to conduct additional identity verification or contact the consumer directly before granting credit. One of the three consumer reporting agencies estimated that its 7-year fraud alerts involving identity theft increased 36 percent over 2 recent years—from about 65,600 in 1999 to 89,000 in 2000. A second agency reported that its 7-year fraud alerts increased about 53 percent in recent comparative 12-month periods; that is, the number increased from 19,347 during one 12-month period (July 1999 through June 2000) to 29,593 during the more recent period (July 2000 through June 2001). The third agency reported about 92,000 fraud alerts for 2000 but was unable to provide information for any earlier year. The federal Identity Theft Act (P.L. 105-318) required the FTC to “log and acknowledge the receipt of complaints by individuals who certify that they have a reasonable belief” that one or more of their means of identification have been assumed, stolen, or otherwise unlawfully acquired. In response to this requirement, on November 1, 1999, FTC established a toll-free telephone hotline (1-877-ID-THEFT) for consumers to report identity theft. Information from complainants is accumulated in a central database (the Identity Theft Data Clearinghouse) for use as an aid in law enforcement and prevention of identity theft.
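The year-over-year growth figures the consumer reporting agencies cited above can be checked directly from the reported counts; a minimal sketch:

```python
# Percent-change check on the fraud alert counts reported above.

def percent_change(earlier: float, later: float) -> float:
    return (later - earlier) / earlier * 100

# First agency: ~65,600 seven-year fraud alerts in 1999 to ~89,000 in 2000.
print(round(percent_change(65_600, 89_000)))  # -> 36
# Second agency: 19,347 (July 1999-June 2000) to 29,593 (July 2000-June 2001).
print(round(percent_change(19_347, 29_593)))  # -> 53
```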
From its establishment in November 1999 through September 2001, FTC’s Identity Theft Data Clearinghouse received a total of 94,100 complaints from victims, including 16,784 complaints transferred to the FTC from the SSA/OIG. In the first month of operation, the Clearinghouse answered an average of 445 calls per week. By March 2001, the average number of calls answered had increased to over 2,000 per week. In December 2001, the weekly average was about 3,000 answered calls. However, FTC staff noted that identity theft-related statistics may, in part, reflect enhanced consumer awareness and reporting. SSA/OIG operates a fraud hotline to receive allegations of fraud, waste, and abuse. In recent years, SSA/OIG has reported a substantial increase in calls related to identity theft. For example, allegations involving SSN misuse increased more than fivefold, from about 11,000 in fiscal year 1998 to about 65,000 in fiscal year 2001. A review performed by SSA/OIG of a sample of 400 allegations of SSN misuse indicated that up to 81 percent of all allegations of SSN misuse related directly to identity theft. As the SSA/OIG has stated: “… our office has investigated numerous cases where individuals apply for benefits under erroneous SSNs. Additionally, we have uncovered situations where individuals counterfeit SSN cards for sale on America’s streets. From time to time, we have even encountered SSA employees who sell legitimate SSNs for hundreds of dollars. Finally, we have seen examples where SSA’s vulnerabilities in its enumeration business process [i.e., the process for issuing SSNs] adds to the pool of SSNs available for criminal fictitious identities.” Although federal law enforcement agencies do not have information systems that specifically track identity theft cases, the agencies provided us with statistics for identity theft-related crimes. Regarding bank fraud, for instance, the FBI reported that its arrests increased from 579 in 1998 to 645 in 2000—and the total was even higher (691) in 1999. The Secret Service reported that, for recent years, it has redirected its identity theft-related efforts to focus on high-dollar, community-impact cases. Thus, even though the total number of identity theft-related cases closed by the Secret Service decreased from 8,498 in fiscal year 1998 to 7,071 in fiscal year 2000, the amount of fraud losses prevented in these cases increased from a reported average of about $73,000 in 1998 to an average of about $218,000 in 2000. The Postal Inspection Service, in its fiscal year 2000 annual report, noted that identity theft is a growing trend and that the agency’s investigations of such crime had “increased by 67 percent since last year.” The report further observed: “The availability of information on the Internet, in combination with the advances in computer hardware and software, makes it easier for the criminal to assume the identity of another for the purposes of committing fraud. For example, there are web-sites that offer novelty identification cards (including the hologram). After downloading the format, fonts, art work, and hologram images, the information can be easily modified to resemble a state-issued driver’s license.
In addition to drivers’ licenses, there are web-sites that offer birth certificates, law enforcement credentials (including the FBI), and Internal Revenue Service forms.” Similarly, the SSA/OIG has noted that, “The ever-increasing number of identity theft incidents has exploded as the Internet has offered new and easier ways for individuals to obtain false identification documents, including Social Security cards.” Aliens and others have used identity theft or other forms of identity fraud to create fraudulent documents that might enable individuals to enter the country and seek job opportunities. With nearly 200 countries using unique passports, official stamps, seals, and visas, the potential for immigration document fraud is great. In addition, more than 8,000 state or local offices issue birth certificates, driver’s licenses, and other documents aliens can use to establish residency or identity. This further increases the number of documents that can be fraudulently used by aliens to gain entry into the United States, obtain asylum or relief from deportation, or receive such other immigration benefits as work permits or permanent residency status. Reportedly, large-scale counterfeiting has made employment eligibility documents widely available. For example, in May 1998, INS seized more than 24,000 counterfeit Social Security cards in Los Angeles after undercover agents purchased 10,000 counterfeit INS permanent resident cards from a counterfeit document ring. Generally, when a person attempts to enter the United States at a port of entry, INS inspectors require the individual to show one of several documents that would prove identity and/or authorize entry. These documents include border crossing cards, alien registration cards, nonimmigrant visas, U.S. passports or other citizenship documents, foreign passports or citizenship documents, reentry permits, refugee travel documents, and immigrant visas. At ports of entry, INS inspectors annually intercept tens of thousands of fraudulent documents presented by aliens attempting to enter the United States. As table 1 shows, INS inspectors intercepted over 100,000 fraudulent documents annually in fiscal years 1999 through 2001. Generally, about one-half of all the intercepted documents were border crossing cards and alien registration cards. The availability of jobs is one of the primary magnets attracting illegal aliens to the United States. Immigration experts believe that as long as opportunities for employment exist, the incentive to enter the United States illegally will persist and efforts at the U.S. borders to prevent illegal entry will be undermined. The Immigration Reform and Control Act (IRCA) of 1986 made it illegal for employers to knowingly hire unauthorized aliens. IRCA requires employers to comply with an employment verification process intended to provide employers with a means to avoid hiring unauthorized aliens. The process requires newly hired employees to present documentation establishing their identity and eligibility to work. From a list of 27 acceptable documents, employees have the choice of presenting 1 document establishing both identity and eligibility to work (e.g., an INS permanent resident card) or 1 document establishing identity (e.g., a driver’s license) and 1 establishing eligibility to work (e.g., a Social Security card). Generally, employers cannot require the employees to present a specific document. 
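The document-combination rule just described lends itself to a simple validity check. Here is a minimal sketch in Python; the document names are an illustrative subset, not the full list of 27 acceptable documents.

```python
# Sketch of the IRCA employment verification rule: one document that
# establishes both identity and work eligibility, OR one identity
# document plus one work-eligibility document. Illustrative subset only.

BOTH = {"INS permanent resident card"}       # identity AND eligibility
IDENTITY_ONLY = {"driver's license"}
ELIGIBILITY_ONLY = {"Social Security card"}

def satisfies_verification(presented: set) -> bool:
    if presented & BOTH:
        return True
    return bool(presented & IDENTITY_ONLY) and bool(presented & ELIGIBILITY_ONLY)

print(satisfies_verification({"INS permanent resident card"}))               # True
print(satisfies_verification({"driver's license", "Social Security card"}))  # True
print(satisfies_verification({"driver's license"}))                          # False
```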
Employers are to review the document or documents that an employee presents and complete an Employment Eligibility Verification Form (INS Form I-9). On the form, employers are to certify that they have reviewed the documents and that the documents appear genuine and relate to the individual. Employers are expected to judge whether the documents are obviously fraudulent. INS is responsible for checking employer compliance with IRCA’s verification requirements. Significant numbers of aliens unauthorized to work in the United States have used fraudulent documents to circumvent the employment verification process designed to prevent employers from hiring them. For example, INS data showed that about 50,000 unauthorized aliens were found to have used 78,000 fraudulent documents to obtain employment over the 20-month period from October 1996 through May 1998. About 60 percent of the fraudulent documents used were INS documents, 36 percent were Social Security cards, and 4 percent were other documents, such as driver’s licenses. Also, we noted that counterfeit employment eligibility documents were widely available. For instance, in November 1998 in Los Angeles, INS seized nearly 2 million counterfeit documents, such as INS permanent resident cards and Social Security cards, which were headed for distribution points around the country. Aliens have also attempted to use fraudulent documents or other illegal means to obtain other immigration benefits, such as naturalization or permanent residency. Document fraud encompasses the counterfeiting, sale, or use of false documents, such as birth certificates, passports, or visas, to circumvent U.S. immigration laws and may be part of some benefit application fraud cases. Such fraud threatens the integrity of the legal immigration system. Although INS has not quantified the extent of immigration benefit fraud, agency officials told us that the problem was pervasive and would increase. In one case, for example, an immigration consulting business filed 22,000 applications for aliens to qualify under a legalization program. Nearly 5,500 of the aliens’ claims were determined to be fraudulent, and another 4,400 were suspected of being fraudulent. In another example, according to an INS Miami District Office official, during the month of January 2001 its investigative unit received 205 leads, of which 84 were facilitator cases (e.g., cases involving individuals or entities who prepare fraudulent benefit applications or who arrange marriages for a fee for the purpose of fraudulently enabling an alien to remain in the United States). In both of these examples, fraudulent documents played a role in the attempts to obtain immigration benefits. “In addition to the credit card and financial fraud crimes often committed, identity theft is a major facilitator of international terrorism. Terrorists have used stolen identities in connection with planned terrorist attacks. An Algerian national facing U.S. charges of identity theft, for example, allegedly stole the identities of 21 members of a health club in Cambridge, Massachusetts, and transferred the identities to one of the individuals convicted in the failed 1999 plot to bomb the Los Angeles International Airport.” The events of September 11, 2001, have increased the urgency of being able to effectively authenticate the identity of individuals.
In addition to using identity theft or identity fraud to enter the United States illegally and seek job opportunities, some aliens have used fraudulent documents in connection with serious crimes, such as narcotics trafficking and terrorism. For instance, according to INS, although most aliens are smuggled into the United States to pursue employment opportunities, some are smuggled as part of a criminal or terrorist enterprise. INS believes that its increased enforcement efforts along the southwest border have prompted greater reliance on alien smugglers and that alien smuggling is becoming more sophisticated, complex, organized, and flexible. In a fiscal year 2000 threat assessment, INS predicted that fraud in obtaining immigration benefits would continue to rise as the volume of petitions for benefits grows and as smugglers search for other methods to introduce illegal aliens into the United States. Also, INS believes organized crime groups will increasingly use smugglers to facilitate illegal entry of individuals into the United States to engage in criminal activities. Alien smugglers are expected to increasingly use fraudulent documents to introduce aliens into the United States. “One of the conspirators in the World Trade Center bombing entered the country on a photo-substituted Swedish passport in September 1992. The suspect used a Swedish passport ‘expecting to pass unchallenged through the INS inspection area at New York’s Kennedy Airport—since an individual bearing a valid Swedish passport does not even need a visa to enter the United States.’ When the terrorist arrived at John F. Kennedy International Airport (JFK), an INS inspector suspected that the passport had been altered. A search of his luggage revealed instructional materials for making bombs; the subject was detained and sentenced to six months’ imprisonment for passport fraud. In March 1994 he was convicted for his role in the World Trade Center bombing and sentenced to 240 years in prison and a $500,000 fine.” Furthermore, regarding this terrorist incident, a United States Sentencing Commission report noted that, “The World Trade Center defendant used, and was in possession of, numerous false identification documents, such as photographs, bank documents, medical histories, and education records from which numerous false identities could have been created.” “Terrorist financing methods range from the highly sophisticated to the most basic. There is virtually no financing method that has not at some level been utilized by terrorists and terrorist groups. Traditionally, their efforts have been aided considerably by the use of correspondent bank accounts, private banking accounts, offshore shell banks, … bulk cash smuggling, identity theft, credit card fraud, and other criminal operations such as illegal drug trafficking. (Emphasis added.) “There often is a nexus between terrorism and organized crime, including drug trafficking. … Both groups make use of fraudulent documents, including passports and other identification and customs documents to smuggle goods and weapons.” “Under the direction of the U.S. Department of Justice (DOJ), investigators subpoenaed records for all 9,000 airport employees with security badges to identify instances of SSN misuse. They identified 61 individuals with the highest-level security badges and 125 individuals with lower level badges who misused SSN’s. A Federal grand jury indicted 69 individuals for Social Security and INS violations. 
Sixty-one of the 69 individuals arrested had an SSN misuse charge by the U.S. Attorney. On December 11, 2001, SSA’s OIG agents and other members of the Operation Safe Travel Task Force arrested 50 individuals. To date, more than 20 have been sentenced after pleading guilty to violations cited in the indictments. Many are now involved in deportation proceedings. There were other similar airport operations after the Salt Lake City Operation, and more are underway.” In the May 2002 report, the SSA Inspector General noted that identity theft begins, in most cases, with the misuse of an SSN. In this regard, the Inspector General emphasized the importance of protecting the integrity of the SSN, especially given that this “de facto” national identifier is the “key to social, legal, and financial assimilation in this country” and is a “link in our homeland security goal.” In its 1999 study of identity theft, the United States Sentencing Commission reported that SSNs and driver’s licenses are the identification means most frequently used to generate or “breed” other fraudulent identifiers. Also, in early 1999, following passage of the federal Identity Theft Act, the U.S. Attorney General’s Council on White Collar Crime established the Subcommittee on Identity Theft to foster coordination of investigative and prosecutorial strategies. Subcommittee leadership is vested in the Fraud Section of the Department of Justice’s Criminal Division, and membership includes various federal law enforcement and regulatory agencies, as well as state and local law enforcement representation. The subcommittee chairman told us that, since the terrorist incidents of September 11, 2001, the subcommittee has begun to focus more on prevention. For example, the chairman noted that the American Association of Motor Vehicle Administrators attended a recent subcommittee meeting to discuss ways to protect against counterfeit or fake driver’s licenses. The May 2002 SSA/OIG report, cited previously, stated that, “while the ability to punish identity theft is important, the ability to prevent it is even more critical.” In this regard, the Inspector General noted that effective protections to prevent SSN misuse must be put in place at three stages—before issuance of the SSN, during the life of the number holder, and upon that individual’s death. Other prevention efforts designed to enhance technologies in support of identification and verification functions include the following: The Enhanced Border Security and Visa Entry Reform Act of 2002 (P.L. 107-173), signed by the President on May 14, 2002, requires that all travel and entry documents (including visas) issued by the United States to aliens be machine-readable and tamper-resistant and include standard biometric identifiers by October 26, 2004. Also, the act requires the Attorney General to install machine readers and scanners at all U.S. ports of entry by this date so as to allow biometric comparison and authentication of all U.S. travel and entry documents and of all passports issued by visa waiver countries.
Identity theft involves "stealing" another person's personal identifying information, such as their Social Security number, date of birth, or mother's maiden name, and using that information to fraudulently establish credit, run up debt, or take over existing financial accounts. Another pervasive category is the use of fraudulent identity documents by aliens to enter the United States illegally to obtain employment and other benefits. The prevalence of identity theft appears to be growing. Moreover, identity theft is not typically a stand-alone crime; rather, it is usually a component of one or more white-collar or financial crimes. According to Immigration and Naturalization Service (INS) officials, the use of fraudulent documents by aliens is extensive, with INS inspectors intercepting tens of thousands of fraudulent documents at ports of entry in each of the last few years. These documents were presented by aliens attempting to enter the United States to seek employment or obtain naturalization or permanent residency status. Federal investigations have shown that some aliens use fraudulent documents in connection with more serious illegal activities, such as narcotics trafficking and terrorism. Efforts to combat identity fraud in its many forms will likely command the continued attention of policymakers and law enforcement, including investigating and prosecuting perpetrators and pursuing prevention measures that make key identification documents and information less susceptible to counterfeiting or other fraudulent use.
CBP’s SBI program is responsible for identifying and deploying an appropriate mix of technology, known as SBInet (e.g., sensors, cameras, radars, communications systems, and mounted laptop computers for agent vehicles); tactical infrastructure (e.g., pedestrian and vehicle fencing, roads, and lighting); and personnel (e.g., program staff and Border Patrol agents) that are intended to enable CBP agents and officers to gain effective control of U.S. borders. SBInet technology also includes the planned development and deployment of a common operating picture (COP), which is intended to provide uniform data through a command center environment to Border Patrol agents in the field and to all DHS agencies, and to be interoperable with stakeholders external to DHS, such as local law enforcement. The current focus of SBI is on the southwest border areas between the ports of entry that CBP has designated as having the highest need for enhanced border security because of serious vulnerabilities. The SBI program office and its offices of tactical infrastructure and SBInet are responsible for overall program implementation and oversight. Figure 1 is a map of the southwest border and the Border Patrol sectors. In September 2006, CBP awarded a prime contract to the Boeing Company for 3 years, with three additional 1-year options. As the prime contractor, Boeing is responsible for acquiring, deploying, and sustaining selected SBI technology and tactical infrastructure projects. As a result, Boeing is extensively involved in SBI program requirements development, design, production, integration, testing, and maintenance and support of SBI projects. Moreover, Boeing is responsible for selecting and managing a team of subcontractors that provide individual components for Boeing to integrate into the SBInet system. The SBInet contract is largely performance-based—that is, CBP has set requirements for the project, and Boeing and CBP coordinate and collaborate to develop solutions to meet these requirements—and designed to maximize the use of commercial off-the-shelf technology. CBP’s SBI program office oversees the Boeing-led SBI contractor team. CBP is executing part of SBI’s activities through a series of task orders to Boeing for individual projects. As of February 15, 2008, CBP had awarded eight task orders to Boeing. Table 1 is a summary of the task orders awarded to Boeing for SBI projects. In addition to deploying technology across the southwest border, the SBI program office plans to deploy 370 miles of single-layer pedestrian fencing and 300 miles of vehicle fencing by December 31, 2008. Pedestrian fencing is designed to prevent people on foot from crossing the border, while vehicle fencing consists of physical barriers meant to stop the entry of vehicles. The SBI program office, through the tactical infrastructure program, is using the U.S. Army Corps of Engineers (USACE) to contract for fencing and supporting infrastructure (such as lights and roads), complete required environmental assessments, and acquire necessary real estate. In addition, in January 2008, CBP issued Boeing a supply and supply chain management task order for the purchase of construction items, such as steel. In December 2006, DHS estimated that the total cost for completing the deployment along the southwest border will be $7.6 billion from fiscal years 2007 through 2011.
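As a purely illustrative sketch of the common operating picture concept described above, heterogeneous sensor detections would be normalized into one uniform record for distribution from a command center. The record fields and structure here are hypothetical; nothing in this sketch reflects SBInet's actual design.

```python
# Illustrative only: normalizing heterogeneous sensor detections into a
# uniform "common operating picture" record. Field names and structure
# are hypothetical and do not reflect SBInet's actual design.
from dataclasses import dataclass

@dataclass
class Detection:
    source: str       # e.g., "radar", "camera", "ground sensor"
    latitude: float
    longitude: float
    timestamp: float  # seconds since epoch

def to_cop_record(d: Detection) -> dict:
    """Convert a raw detection into a uniform record for distribution."""
    return {
        "source": d.source,
        "position": (round(d.latitude, 5), round(d.longitude, 5)),
        "time": d.timestamp,
    }

feed = [
    Detection("radar", 31.33401, -110.93842, 1203638400.0),
    Detection("camera", 31.33415, -110.93820, 1203638402.5),
]
print([to_cop_record(d) for d in feed])
```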
DHS has not yet reported the estimated life cycle cost for the SBI program, which is the total cost to the government for a program over its full life, consisting of research and development, operations, maintenance, and disposal costs. Since fiscal year 2007, Congress has appropriated about $2.5 billion for SBI. DHS has requested an additional $775 million for SBI for fiscal year 2009. DHS announced its final acceptance of Project 28 from Boeing on February 22, 2008, completing its first efforts at implementing SBInet, and is now gathering lessons learned from the project that it plans to use for future technology development. The scope of the project, as described in the task order between Boeing and DHS, was to provide a system with the detection, identification, and classification capabilities required to control the border, at a minimum, along 28 miles within the Tucson sector. To do so, Boeing was to provide, among other things, mobile towers equipped with radar, cameras, and other features; a COP that communicates comprehensive situational awareness; and secure-mounted laptop computers retrofitted in vehicles to provide agents in the field with COP information. As we previously reported, Boeing delivered and deployed the individual technology components of Project 28—such as the towers, cameras, and radars—on schedule. See figures 2 and 3 below for photographs of SBInet technology along the southwest border. However, Boeing’s inability to integrate these components with the COP software delayed the implementation of Project 28 by more than 5 months beyond the planned June 13, 2007, milestone, when Border Patrol agents were to begin using Project 28 technology to support their activities. Specifically, SBI program office officials said that the software Boeing selected for the COP was intended to be used as a law enforcement dispatch system and was not designed to process and distribute the type of information being collected by the cameras, radars, and sensors. SBI officials told us that Boeing selected the system based on initial conversations with Border Patrol officials but found limitations with it once it was deployed to the field. As we reported in October 2007, among other technical problems reported were that it was taking too long for radar information to display in command centers and that newly deployed radars were being activated by rain or other environmental factors, making the system unusable. According to officials from the SBI program office, Boeing worked to correct these problems from July through November 2007. As one example of improvement, Border Patrol officials reported that Boeing added an autofocus mechanism to the cameras located on the nine towers. However, SBInet and Border Patrol officials identified issues that remain unresolved. For example, the Border Patrol reported that as of February 2008 problems remained with the resolution of the camera image at distances over 5 kilometers, while expectations were that the cameras would work at about twice that distance. From June 26 through November 19, 2007, Boeing submitted three corrective action plans, documents that defined Boeing’s technical approach for correcting the problems associated with Project 28 and the steps that needed to occur for DHS to conditionally accept the system. As we reported in October 2007, DHS officially notified Boeing in August 2007 that it would not accept Project 28 until certain problems were corrected.
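The rain-triggered radar false alarms described above are a classic clutter problem. The report does not say how Boeing ultimately addressed it, but a generic persistence filter, which requires a target to appear in several recent scans before it is reported, is one common mitigation, sketched here for illustration.

```python
# Generic persistence filter: report a target only after it appears in
# at least `min_hits` of the last `window` radar scans. One common way
# to suppress transient clutter (e.g., rain); NOT Boeing's actual fix.
from collections import deque

def persistence_filter(scans, window=5, min_hits=3):
    """scans: iterable of 0/1 detections per scan; yields filtered output."""
    recent = deque(maxlen=window)
    for hit in scans:
        recent.append(hit)
        yield sum(recent) >= min_hits

# An isolated rain spike is suppressed; a sustained track passes through.
print(list(persistence_filter([0, 1, 0, 0, 0, 1, 1, 1, 1])))
```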
DHS conditionally accepted Project 28 on December 7, 2007, but included a requirement for Boeing to analyze the quality of the project’s video signals and radar data and the timing of all components by January 11, 2008. Upon conditional acceptance, the government began operating Project 28, and SBI program office and Border Patrol officials told us that plans were under way to conduct additional testing of the system capabilities—including operational testing, which is used to determine that the system performs in the environment in which it is to operate. This testing was not scheduled to take place until after final acceptance of Project 28. According to SBI program office and Border Patrol officials, the results of this testing will not be used to make changes to Project 28, but will instead be used to guide future SBInet development. In addition, DHS announced its final acceptance of Project 28 on February 22, 2008, noting that Boeing had met its contractual requirements. However, according to SBI program officials, the outcomes of future SBInet development will define the equipment that will replace most of Project 28 system components. Both SBI program office and Border Patrol officials stated that although Project 28 did not fully meet their expectations, they are gathering lessons learned and are ready to move forward with developing SBInet technologies that will better meet their needs. Table 2 summarizes key events for Project 28. The SBI program office reported that it is moving forward with SBInet development beyond Project 28; however, it has revised its approach and timeline for doing so. As noted earlier in this statement, in addition to the $20.6 million task order awarded for Project 28, Boeing has also received other task orders as part of its overall contract with CBP. For example, in August 2007 DHS awarded a $69 million task order to Boeing to design the technical, engineering, and management services it would perform to plan and deploy SBInet system components within the Border Patrol’s Tucson, Yuma, and El Paso sectors. In addition, the SBI program office reported that on December 7, 2007, DHS awarded a 14-month task order worth approximately $64.5 million to Boeing to design, develop, and test, among other things, an upgraded COP software system for CBP command centers and agent vehicles, known as COP version 0.5. According to the SBI program office, planned SBInet development, such as the work being conducted by Boeing under these task orders, will eventually replace and improve upon Project 28. These officials stated that in light of the difficulties that DHS encountered during Boeing’s deployment of Project 28, the Secretary requested, and CBP has proposed, a revised strategy that is more deliberate. As two SBInet program managers put it, they want to develop SBInet “right, not fast.” We reported in October 2007 that SBI program office officials expected to complete all of the first phase of technology projects by the end of calendar year 2008. However, in February 2008, the SBI program office estimated that the first planned deployment of technology—including components linked to the updated COP—will occur in two geographic areas within the Tucson sector by the end of calendar year 2008, with the remainder of the deployments to the Tucson, Yuma, and El Paso sectors completed by the end of calendar year 2011. Officials from the SBI program office said that the Project 28 location is one of the two areas where the planned first deployments will occur.
An official from the SBI program office noted that this schedule reflects DHS’s revised approach to developing SBInet technology and that meeting this timeline depends, in part, on the availability of funding. At this time, the SBI program office is still in the process of defining life cycle costs for SBInet development. SBI program office and Border Patrol officials told us they have learned lessons during the development of Project 28 that will influence future SBInet development, including the technology that is planned to be deployed along the southwest border. For example, testing to ensure the components—such as radar and cameras—were integrated correctly before being deployed to the field at the Tucson sector did not occur given the constraints of the original 8-month timeline of the firm-fixed-price task order with Boeing, according to officials from the SBI program office. As a result, incompatibilities between individual components were not discovered in time to be corrected by the planned Project 28 deployment deadline. To address this issue moving forward with SBInet development, Boeing has established a network of laboratories to test how well the integration of the system works, and according to the SBI program office, deployment will not occur until the technology meets specific performance specifications. Another lesson learned involved how the Project 28 system requirements were developed by Boeing. SBI program office and Border Patrol officials told us that the requirements for how the Project 28 system was to operate were designed and developed by Boeing with minimal input from the intended operators of the system, including Border Patrol agents. Instead, Boeing based the requirements for how Project 28 was to be designed and developed on information in the contract task order. The lack of user involvement resulted in a system that does not fully address or satisfy user needs. In February 2008, SBI program officials reported that Project 28 was designed to be a demonstration project, rather than a fully operating system, and there was not enough time built into the contract to obtain feedback from all of the intended users of the system during its design and development. While Border Patrol agents in the Tucson sector agreed with Boeing’s conceptual design of Project 28, they said the final system might have been more useful if they and others had been given an opportunity to provide feedback throughout the process. For example, Border Patrol agents told us they would have found the laptops mounted into agent vehicles safer and easier to use if they were larger and manipulated by a touch screen rather than with a pencil-shaped stylus, as using a stylus to manipulate the screen while driving is impractical. In addition, the laptops were not mounted securely enough to prevent significant rattling when driving on rough terrain, making the laptops difficult to use and prone to needing repair. While user feedback was limited for Project 28, SBI program office officials have recognized the need to involve the intended operators when defining requirements and have efforts underway to do so for future SBInet development. For example, officials from the Border Patrol, CBP Air and Marine, and the CBP Office of Field Operations reported that representatives from their offices were involved in the development of requirements for SBInet technology as early as October 2006 and on an ongoing basis since then. 
Specifically, SBI program officials stated that Border Patrol users participated in requirements workshops with Boeing held in October 2006 at CBP headquarters and then at various field locations from December 2006 through June 2007, from which the SBInet operational requirements were derived (a process separate from Project 28). According to the SBI program office, users from other CBP offices, such as the Office of Field Operations and Air and Marine, have been involved in meetings as the SBI program office updates these requirements in preparation for the next development efforts. Additionally, SBI program officials stated that Boeing held meetings in January and February 2008 specifically designed to integrate user input into the development of the COP version 0.5. Since DHS conditionally accepted the task order from Boeing on December 7, 2007, those Border Patrol agents in the Tucson sector who have received updated training on Project 28 have been using the technologies as they conduct their border security activities. Border Patrol agents reported that they would have liked to have been involved sooner with the design and development of Project 28, since they are the ones who operate the system. Border Patrol officials stated that it is not an optimal system. Border Patrol agents from the Tucson sector provided examples of Project 28 capabilities that do not adequately support Border Patrol operations because of their design. As noted earlier in this statement, Border Patrol agents have had difficulties using the laptops mounted in agent vehicles to provide them with COP information. However, according to Border Patrol agents, Project 28 has provided them with improved capabilities over their previous equipment, which included items such as cameras and unattended ground sensors that were only linked to nearby Border Patrol units, not into a centralized command and control center. In addition, Border Patrol officials we spoke with at the Tucson sector noted that Project 28 has helped its agents become more familiar with the types of technological capabilities they are integrating into their operations now and in the future. As we reported in October 2007, the Border Patrol’s Tucson sector was developing a plan to integrate SBInet into its operating procedures. However, in February 2008 a senior official from the Border Patrol’s Tucson sector told us that the plan is still in draft form because of the delays in the deployment of Project 28. In October 2007 we reported that the 22 trainers and 333 operators who were initially trained on the Project 28 system were to be retrained with revised curriculum because of deployment delays and changes to the COP software. As of January 2008, 312 Border Patrol operators and 18 trainers had been retrained on Project 28. According to Border Patrol agents we spoke with at the Tucson sector, a group of Border Patrol agents provided significant input into the revisions that the Boeing subcontractor made to the Project 28 training curriculum. Officials from the SBInet Training Division and Border Patrol agents reported that originally there were plans to train 728 Border Patrol operators located in the Project 28 area by January 2008. However, no additional training will be conducted on Project 28 because officials expect future SBInet development to eventually replace it.
For example, according to the SBInet Training Division, the COP version 0.5 currently under development by Boeing will replace the Project 28 COP, and this will require new training. Deployment of tactical infrastructure projects along the southwest border is on schedule, but meeting the SBI program office’s goal to have 370 miles of pedestrian fence and 300 miles of vehicle fencing in place by December 31, 2008, will be challenging, and total costs are not yet known. As of February 21, 2008, the SBI program office reported that it had constructed 168 miles of pedestrian fence and 135 miles of vehicle fence (see table 3). According to SBI program office officials, the deployment of tactical infrastructure projects is on schedule, but these officials reported that keeping on schedule will be challenging because of various factors, including difficulties in acquiring rights to border lands. Unlike prior fencing projects that were primarily located on federal land, approximately 54 percent of planned projects are scheduled to be constructed on private property. We previously reported that, as of July 2007, CBP anticipated community resistance to deployment for 130 of its 370 planned miles of pedestrian fencing. CBP officials told us that, of 480 owners of private property along the relevant segments of the border, all but 148 gave CBP access to survey their land prior to December 2007. In December, CBP, working in conjunction with the Department of Justice (DOJ), sent letters to most of the 148 remaining land owners reiterating the request for access and notifying them of the government's intent to pursue court-ordered access if necessary. As of February 16, 2008, approximately 50 percent of the land owners who received these letters had given CBP access to their land to do surveys. In some cases where access has not been granted, DOJ has begun the legal process known as “eminent domain” to obtain court-ordered access to the property. SBI program office officials state that they are working to acquire rights to border lands; however, until the land access issues are resolved, this factor will continue to pose a risk to meeting the deployment targets. SBI program office officials are unable to estimate the total cost of pedestrian and vehicle fencing because they do not yet know the type of terrain where the fencing is to be constructed, the materials to be used, or the cost to acquire the land. In addition, in October 2007, we reported that to minimize one of the many factors that add to the cost, CBP has previously drawn upon its Border Patrol agents and Department of Defense military personnel to assist in such efforts. However, SBI program office officials reported that they plan to use more costly commercial labor for future infrastructure projects to meet their deadlines. In February 2008, SBI program office officials told us that they estimate construction costs for pedestrian fencing will be about $4 million per mile and vehicle fencing costs will be about $2 million per mile. However, total costs will be higher because this estimate does not include other expenses, such as contract management, contract incentives to meet an expedited schedule, higher-than-expected property acquisition costs, and unforeseen costs associated with working in remote areas.
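Taken at face value, the per-mile estimates above imply a rough floor for construction costs alone. A back-of-the-envelope sketch follows; as the SBI program office notes, actual totals would be higher.

```python
# Back-of-the-envelope construction cost using the SBI program office's
# February 2008 per-mile estimates. Excludes contract management,
# schedule incentives, land acquisition, and remote-area costs.
pedestrian = 370 * 4e6  # 370 miles at ~$4 million per mile
vehicle = 300 * 2e6     # 300 miles at ~$2 million per mile
print(f"${(pedestrian + vehicle) / 1e9:.2f} billion")  # -> $2.08 billion
```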
As the SBI program office moves forward with tactical infrastructure construction, it is making modifications based on lessons learned from previous fencing efforts. For example, for future fencing projects, the SBI program office plans to buy construction items, such as steel, in bulk; use approved fence designs; and contract out the maintenance and repair of the tactical infrastructure. SBI program office officials estimate that buying essential items in bulk will make fencing deployment more economical and will reduce the likelihood of shortages and delays of critical equipment. SBI program office officials also believe that using pre-approved and tested fence designs (see fig. 4) will expedite preconstruction planning and will allow for more efficient maintenance and repair. In addition, the SBI program office plans to award a contract to maintain and service all initial, current, and future tactical infrastructure deployed through SBI because it believes that it will be more efficient than relying on Border Patrol agents and military personnel who also have other duties. The SBI program office established a staffing goal of 470 employees for fiscal year 2008, made progress toward meeting this goal, and published a human capital plan in December 2007; however, the SBI program office is in the early stages of implementing this plan. As of February 1, 2008, the SBI program office reported having 142 government staff and 163 contractor support staff, for a total of 305 employees, up from 247 staff on September 30, 2007. In addition, SBI program office officials reported that they had selected an additional 39 staff that the program office is in the process of bringing onboard. These officials also told us that they believe they will be able to meet their staffing goal by the end of September 2008 and will have 261 government staff and 209 contractor support staff on board (see table 4). In addition, SBI program office officials said they would like to bring the ratio of government employees to contractor staff closer to 1:1 because their office has determined that this ratio provides the right mix of personnel with the skills necessary to ensure appropriate government oversight. The fiscal year 2008 staffing goal (261 government staff to 209 contractor support staff) would yield a government-to-contractor ratio of about 1.25 to 1, better than 1:1. In December 2007, the SBI program office published the first version of its Strategic Human Capital Management Plan and is now in the early implementation phase. As we have previously reported, a strategic human capital plan is a key component used to define the critical skills and competencies that will be needed to achieve programmatic goals and outline ways an organization can fill gaps in knowledge, skills, and abilities. The SBI program office’s plan outlines seven main goals for the office and includes planned activities to accomplish those goals, which align with federal government best practices. However, the activities are in the early stages of implementation. We have previously reported that a properly designed and implemented human capital program can contribute to achieving an agency’s mission and strategic goals. Until the SBI program office fully implements its plan, it will lack a baseline and metrics by which to judge the program. Table 5 summarizes the seven human capital goals, the SBI program office’s planned activities, and the steps taken to accomplish these activities, as of February 20, 2008. Securing the nation’s borders is a daunting task.
Project 28, an early technology project, resulted in a product that did not fully meet user needs, and the project’s design will not be used as the basis for future SBInet development. To ensure that future SBInet development efforts deliver an operational capability that meets user needs and delivers technology that can be used in additional projects, it is important that the lessons learned on Project 28 continue to be applied and that user input continues to be sought so that future technology projects are successful. In the tactical infrastructure area, although fencing projects are currently on schedule, meeting future deadlines will be challenging because of various factors, including difficulties in acquiring rights to border land. Furthermore, future tactical infrastructure costs are not yet known because issues regarding land acquisition have not been resolved and other decisions, such as the materials to be used, have not been made. These issues underscore Congress’ need to stay closely attuned to DHS’s progress in the SBI program to make sure that performance, schedule, and cost estimates are achieved and the nation’s border security needs are fully addressed. This concludes my prepared testimony. I would be happy to respond to any questions that members of the subcommittees may have. For questions regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or stanar@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this statement were Susan Quinlan, Assistant Director; Deborah Davis, Assistant Director; Jeanette Espínola; Karen Febey; Michael Parr; Jamelyn Payan; David Perkins; Jeremy Rothgerber; and Leslie Sarapu.
In November 2005, the Department of Homeland Security (DHS) established the Secure Border Initiative (SBI), a multiyear, multibillion-dollar program to secure U.S. borders. One element of SBI is the U.S. Customs and Border Protection's (CBP) SBI program, which is responsible for developing a comprehensive border protection system through a mix of security infrastructure (e.g., fencing) and surveillance and communication technologies (e.g., radars, sensors, cameras, and satellite phones). GAO was asked to monitor DHS's progress in implementing CBP's SBI program. This testimony provides GAO's observations on (1) technology implementation; (2) the extent to which Border Patrol agents have been trained and are using SBI technology; (3) infrastructure implementation; and (4) how the CBP SBI program office has defined its human capital goals and the progress it has made to achieve these goals. GAO's observations are based on analysis of DHS documentation, such as program schedules, contracts, and status reports. GAO also conducted interviews with DHS officials and contractors and made visits to sites along the southwest border where SBI deployment is under way. GAO performed the work from November 2007 through February 2008. DHS generally agreed with GAO's findings. On February 22, 2008, DHS announced final acceptance of Project 28, a $20.6 million project to secure 28 miles along the southwest border, and is now gathering lessons learned to use in future technology development. The scope of the project, as described in the task order DHS issued to Boeing--the prime contractor DHS selected to acquire, deploy, and sustain systems of technology across the U.S. borders--was to provide a system with the capabilities required to control 28 miles of border in Arizona. CBP officials responsible for the program said that although Project 28 will not be replicated, they have learned lessons from their experience that they plan to integrate into future technology development. CBP has extended its timeline and revised its approach for future projects and does not expect all of the first phase of its next technology project to be completed before the end of calendar year 2011. Border Patrol agents began using Project 28 technologies in December 2007, and as of January 2008, 312 agents in the area had received updated training. According to Border Patrol agents, while Project 28 is not an optimal system to support their operations, it has provided greater technological capabilities than did their previous equipment. Not all of the Border Patrol agents in the Tucson sector have been trained on Project 28 because the system will be replaced with newer technologies. Deployment of fencing along the southwest border is on schedule, but meeting CBP's goal to have 370 miles of pedestrian fence and 300 miles of vehicle fence in place by December 31, 2008, will be challenging, and total costs are not yet known. As of February 21, 2008, the SBI program office reported that it had constructed 168 miles of pedestrian fence and 135 miles of vehicle fence. CBP officials reported that meeting deadlines has been difficult because of various factors, including difficulties in acquiring rights to border lands. Moreover, CBP officials are unable to estimate the total cost of pedestrian and vehicle fencing because they do not yet know the type of terrain where the fencing is to be constructed, the materials to be used, or the cost to acquire the land.
As CBP moves forward with construction, it is making modifications based on lessons learned from previous efforts. For example, CBP plans to buy construction items, such as steel, in bulk; use approved fence designs; and contract out the maintenance and repair. CBP's SBI program office established a staffing goal of 470 employees for fiscal year 2008, made progress toward meeting this goal and published its human capital plan in December 2007; however, it is in the early stages of implementing the plan. As of February 1, 2008, the office reported having a total of 305 employees. SBI program officials said that they believe they will be able to meet their staffing goal of 470 staff by the end of the fiscal year. In December 2007, the SBI office published the first version of its Strategic Human Capital Management Plan and is now in the early implementation phase. The plan outlines seven main goals for the office and activities to accomplish those goals, which align with federal government best practices.
FAA has made progress in several areas to improve its implementation of NextGen. FAA has set performance goals for NextGen through 2018, including goals to improve the throughput of air traffic at key airports by 12 percent over 2009 levels, reduce delays by 27 percent from 2009 levels, and achieve a 5 percent reduction in average taxi time at key airports. The setting of NextGen performance goals is a positive step, but much work remains in identifying measurable and reasonable performance metrics and targets for specific NextGen activities. FAA has undertaken a number of NextGen initiatives to improve system efficiency. For example, FAA has begun work to streamline its procedure approval processes—including its environmental reviews of new procedures—and has expanded its capacity to develop new performance-based navigation routes and procedures. In 2010, FAA produced over 200 performance-based navigation routes and procedures, exceeding its goal of 112. FAA reports thousands of gallons of fuel savings from the performance-based navigation routes in operation at Atlanta and the continuous descents being used into Los Angeles and San Francisco. However, aircraft operators have complained that FAA has not produced the most useful or beneficial routes and procedures to date. To address these concerns, FAA has undertaken thorough reviews in a number of areas. FAA has completed initial work to identify improvements needed in the airspace in Washington, D.C.; North Texas; Charlotte, North Carolina; Northern California; and Houston, Texas—focusing on routes and procedures that will produce benefits for operators. While the specific benefits from this work are not yet fully known, FAA expects to achieve measurable reductions in miles flown, fuel burn, and emissions from these actions. In addition, airport surface management capabilities—such as shared surface surveillance data and new techniques to manage the movement of aircraft on the ground—installed in Boston and New York have saved thousands of gallons of fuel and thousands of hours of taxi-out time, according to FAA. With respect to the continuing implementation of NextGen systems and capabilities, our ongoing work has preliminarily found that some key NextGen-related programs are generally proceeding on time and on budget (see table 1). However, some key acquisitions may soon encounter delays, which can increase overall acquisition costs as well as the costs to maintain current systems. For example, delays in implementing the ERAM program are projected to increase costs by $330 million, plus an estimated $7 to $10 million per month in additional costs to continue maintaining the system that ERAM is meant to replace. Moreover, due to the integrated nature of NextGen, many of its component systems are mutually dependent on one or more other systems. For example, ERAM is critical to the delivery of ADS-B because ADS-B requires the use of some ERAM functions. ERAM is also pivotal to the on-time implementation of two other key NextGen acquisitions—Data Communications and SWIM. In part due to ERAM’s delay, FAA pushed the Data Communications program’s start date from September 2011 to February 2012, plans to revise the original SWIM segment 1 cost and schedule plan, and delayed the SWIM segment 2 start date from 2010 to December 2012.
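The cost arithmetic of such slips is easy to make concrete. A minimal sketch using the ERAM figures cited above follows; the 12-month delay length is a hypothetical input, not a figure from this statement.

```python
# Rough cost of an acquisition slip, using the ERAM figures cited above:
# a $330 million projected cost increase plus $7-$10 million per month
# to keep maintaining the legacy system ERAM is meant to replace.
# The 12-month delay length below is a hypothetical input.

def delay_cost(months, monthly_low=7e6, monthly_high=10e6, base_increase=330e6):
    """Return (low, high) total cost estimates for a delay of `months`."""
    return (base_increase + months * monthly_low,
            base_increase + months * monthly_high)

low, high = delay_cost(12)
print(f"${low / 1e6:.0f} million to ${high / 1e6:.0f} million")  # 414 to 450
```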
The long-term result of these schedule changes is not yet known, but they could delay certain SWIM capabilities and hinder the progress of other capabilities that depend, in turn, on the system integration that SWIM is intended to provide. Thus, looking more broadly, the implementation of NextGen—both in the midterm (through 2018) and in the long term (beyond 2018)—will be affected by how well FAA manages program interdependencies. Delays in program implementation, as described above, and budget constraints have also affected FAA’s capital budget planning. The Administration has proposed reducing FAA’s capital budget by a total of $2.8 billion, or 20 percent, for fiscal years 2012 through 2015, largely due to governmentwide budget constraints. Most of this proposed reduction falls on NextGen and NextGen-related spending, as reflected in FAA’s revised 5-year Capital Investment Plan for fiscal years 2012 through 2016. Congress has not completed FAA’s appropriation for fiscal year 2012, but current House and Senate appropriation bills propose to fund the agency near or above 2011 levels. FAA will have to balance its priorities to ensure that NextGen implementation stays on course while also sustaining the current infrastructure—which is needed to prevent failures and maintain the reliability and efficiency of current operations. To maintain credibility with aircraft operators that NextGen will be implemented, FAA must deliver systems and capabilities on time so that operators have incentives to invest in the avionics that will enable NextGen to operate as planned. As we have previously reported, a past FAA program’s cancellation contributed to skepticism about FAA’s commitment to follow through with its plans. That industry skepticism, which we have found lingers today, could delay the time when significant NextGen benefits—such as increased capacity and more direct, fuel-saving routing—are realized. A number of NextGen benefits depend upon having a critical mass of properly equipped aircraft. Reaching that critical mass is a significant challenge because the first aircraft operators to equip will not obtain a return on their investment until many other operators also equip. Stakeholders have proposed various equipage incentives. For example, one such proposal is for a private equity fund, backed by federal guarantees, to provide loans or other financial assistance to operators to help them equip, with payback of the loans dependent on FAA meeting its schedule commitments to implement capabilities that will produce benefits for operators. In addition, the NextGen Advisory Committee has begun to identify the specific avionics requirements for particular NextGen capabilities through the midterm, as well as identifying who—in terms of which parts of the fleet operating in which regions—should be targeted for additional incentives to equip. Our past and ongoing work examining aspects of NextGen has highlighted several other challenges facing FAA in achieving timely and successful implementation. For this statement, we would like to highlight a few specific areas: the potential effect of program delays on international harmonization efforts, the need for FAA to ensure that it addresses human factors and workforce training issues to successfully transition to a new air transportation system, the need for FAA to continue to address potential environmental impacts, and the need for FAA to improve the management and governance of NextGen. Effect of delays on FAA’s ability to collaborate with Europe.
Delays to NextGen programs, and potential reductions in the budget for NextGen activities, could delay the schedule for harmonization with Europe's air traffic management modernization efforts and the realization of these benefits. FAA officials indicated that the need to address funding reductions takes precedence over previously agreed-upon schedules, including those previously coordinated with Europe. For example, FAA officials responsible for navigation systems told us that FAA is restructuring plans for its ground-based augmentation system (GBAS) because of potential funding reductions. While final investment decisions concerning GBAS have yet to be made, these officials said that FAA might have to stop its work on GBAS while Europe continues its GBAS development, with the result that Europe may have an operational GBAS, while FAA does not. A delay in implementing GBAS would require FAA to continue using the current instrument landing system, which does not provide the benefits of GBAS, according to these officials. Such a situation could again fuel stakeholder skepticism about whether FAA will follow through with its commitment to implementing NextGen and, in turn, increase airlines' hesitancy to equip with NextGen technologies.

Need to address human factors and training issues.

Under NextGen, pilots and air traffic controllers will rely to a greater extent on automation, which will change their roles and responsibilities in ways that will necessitate an understanding of the human factors issues involved and require that training be provided on the new automated systems. FAA and the National Aeronautics and Space Administration (NASA)—the primary agencies responsible for integrating human factors issues into NextGen—must ensure that human factors issues are addressed so that controllers, pilots, and others will operate NextGen components in a safe and efficient manner. Failure to do so could delay implementation of NextGen. We recently reported that FAA has not fully integrated human factors into the development of some aviation systems. For example, we noted that controllers involved in the initial operations capabilities tests of ERAM at an air traffic control center in Salt Lake City found using the system cumbersome, confusing, and difficult to navigate, indicating that FAA did not adequately involve the controllers who operate the system in its early development. In response to our recommendations in that report, FAA has created a cross-agency coordination plan in cooperation with NASA that establishes focus areas for human factors research, inventories existing facilities for research, and capitalizes on past and current research on NextGen issues. In addition to integrating human factors research into NextGen systems, FAA and NASA will have to identify and develop the training necessary to address controllers' and pilots' changing roles, and have this training in place before NextGen is fully realized, since during the transition some aircraft will be equipped with NextGen systems and others will not.

Need to address environmental impacts of NextGen.

Another challenge to implementing NextGen is expediting environmental reviews and developing strategies to address the environmental impacts of NextGen. As we stated in our recent report on environmental impacts at airports, with the changes in aircraft flight paths that will accompany NextGen efforts, some communities that were previously unaffected or minimally affected by aircraft noise will be exposed to increased noise levels.
These levels could trigger the need for environmental reviews, as well as raise community concerns. Our report found that addressing environmental impacts can delay the implementation of operational changes, and indicated that a systematic approach to addressing these impacts and the resulting community concerns may help reduce such delays. To its credit, FAA has been working to develop procedures for streamlining environmental review processes that affect NextGen activities.

Need to improve management and governance.

FAA has embarked on an initiative to restructure a number of organizations within the agency. We have previously reported on problems with FAA's management and oversight of NextGen acquisitions and implementation. Under this initiative, FAA plans to abolish and merge a number of committees to improve decision making and reduce the time demands on senior FAA executives. It also plans to make the NextGen organization the responsibility of the Deputy Administrator and to create a new head of program management for NextGen-related programs to ensure improved oversight of NextGen implementation. Further, the Air Traffic Organization will be divided into two branches: operations and NextGen program management. Operations will focus on the day-to-day management of the national airspace, and the program management branch will be responsible for developing and implementing programs while working with operations to ensure proper integration. While eliminating duplicative committees and focusing on accountability for NextGen implementation are positive steps, it remains to be seen whether this latest reorganization will produce the desired results.

Chairman Petri, Ranking Member Costello, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

For further information on this testimony, please contact Gerald L. Dillingham, Ph.D. at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Paul Aussendorf, Maria Edelstein, Heather Krause, Ed Laughlin, and Andrew Von Ah (Assistant Directors); Colin Fallon, Bert Japikse, Ed Menoche, and Dominic Nadarski.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the current progress toward implementing the Next Generation Air Transportation System (NextGen). NextGen will affect nearly every aspect of air transportation and will transform the way in which the air transportation system operates today. It will do so, in part, by (1) using satellite-based surveillance as opposed to ground-based radars, (2) using performance-based navigation instead of cumbersome step-by-step procedures, (3) replacing routine voice communications with data transmissions, and (4) organizing and merging the disjointed data that pilots, controllers, airports, airlines, and others currently rely on to operate the system. The Federal Aviation Administration (FAA) has been planning and developing NextGen since 2003, and is now implementing near-term (through 2012) and mid-term (through 2018) capabilities.

Over the years, concerns have been raised by the Congress and other stakeholders that despite years of effort and billions of dollars spent, FAA has not made sufficient progress in deploying systems and producing benefits. In past reports, we have made a number of recommendations to FAA to address delays in development and acquisitions, improve its processes, and focus on accountability and performance. Others have also made recommendations to FAA to improve its implementation of NextGen. For example, the Department of Transportation's Office of Inspector General recently made recommendations regarding specific NextGen programs, and the NextGen Midterm Implementation Task Force, whose creation was requested by FAA, produced consensus recommendations from industry on specific capabilities FAA should prioritize. Over the last 2 years, FAA has taken several steps and instituted many changes to address several of these issues.

This statement discusses (1) the results of NextGen programs and improvements to date and (2) ongoing issues that will affect NextGen implementation. It is based on our NextGen-related reports and testimonies over the last 2 years; ongoing work for this subcommittee that includes our analysis of selected NextGen acquisitions and our analysis of FAA's efforts to harmonize NextGen with air traffic control modernization efforts in Europe; our review of FAA's 2025 Strategic Plan, 2011 NextGen Implementation Plan, 2012 Budget Submission, and other documents; and selected program updates from FAA officials.

In summary, FAA has improved its efforts to implement NextGen and is continuing its work to address critical issues that we, stakeholders, and others have identified over the years. In some areas, FAA has implemented NextGen capabilities that have demonstrated measurable benefits for system users, such as fuel savings. FAA has also made progress in streamlining its processes, improving its capacity to develop new flight procedures, and focusing its efforts on specific procedures that are needed in key metropolitan areas. Furthermore, we found that several NextGen-related acquisitions are generally on time and on budget. However, some acquisitions have been delayed, which has affected the timelines of other, dependent systems, and the potential exists for other acquisitions to also encounter delays. These delays have resulted in increased costs and reduced benefits. Going forward, FAA must focus on delivering systems and capabilities in a timely fashion to maintain its credibility with industry stakeholders, whose adoption of key technologies is crucial to NextGen's success.
FAA must also continue to monitor how delays will affect international harmonization efforts, address human factors issues, streamline environmental approvals, mitigate environmental impacts, and improve NextGen management and governance.
The DHS Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003. The Chief Privacy Officer is appointed by the Secretary and reports directly to the Secretary. Under departmental guidance, the Chief Privacy Officer is to work closely with other departmental components, such as the General Counsel's Office and the Policy Office, to address privacy issues. The Chief Privacy Officer also serves as the designated senior agency official for privacy, a position that the Office of Management and Budget (OMB) has required of all major departments and agencies since 2005.

The positioning of the Privacy Office within DHS differs from the approach used for privacy offices in other countries, such as Canada and the European Union, where privacy offices are independent entities with investigatory powers. Canada's Privacy Commissioner, for example, reports to the Canadian House of Commons and Senate and has the power to summon witnesses and subpoena documents. In contrast, the DHS privacy officer position was established by the Homeland Security Act as an internal component of DHS. As a part of the DHS organizational structure, the Chief Privacy Officer has the ability to serve as a consultant on privacy issues to other departmental entities that may not have adequate expertise on privacy issues.

The office is divided into two major functional groups addressing Freedom of Information Act (FOIA) and privacy responsibilities, respectively. Within each functional group, major responsibilities are divided among senior staff assigned to oversee key areas, including international privacy policy, departmental disclosure and FOIA, privacy technology, and privacy compliance operations. There are also component-level and program-level privacy officers at the Transportation Security Administration (TSA), the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and U.S. Citizenship and Immigration Services. Figure 1 details the structure of the DHS Privacy Office.

When the Privacy Office was initially established, it had 5 full-time employees, including the Chief Privacy Officer. Since then, the staff has expanded to 16 full-time employees. The Privacy Office has also hired private contractors to assist in meeting its obligations. As of February 2007, the Privacy Office had 9 full-time and 3 half-time contractor staff in addition to its full-time employees. The first Chief Privacy Officer served from April 2003 to September 2005, followed by an Acting Chief Privacy Officer who served through July 2006. In July 2006, the Secretary appointed a second permanent Chief Privacy Officer.

In 2004, the Chief Privacy Officer established the DHS Data Privacy and Integrity Advisory Committee, which is to advise the Secretary and the Chief Privacy Officer on "programmatic, policy, operational, administrative, and technological issues within DHS" that affect individual privacy, data integrity, and data interoperability. The Advisory Committee is composed of privacy professionals from the private sector and academia and is organized into three subcommittees: Data Integrity and Information Protection, Privacy Architecture, and Data Acquisition and Use. To date, the Advisory Committee has issued reports on several privacy issues, such as use of commercial data and radio frequency identification (RFID) technology, and has made related policy recommendations to the department.
The Advisory Committee's charter requires that the committee meet at least once a year; however, thus far it has met quarterly. The Advisory Committee meetings, which are open to the public, are used to discuss progress on planned reports, to identify new issues, to receive briefings from DHS officials, and to hold panel discussions on privacy issues.

The Privacy Office is responsible for ensuring that DHS is in compliance with federal laws that govern the use of personal information by the federal government. Among these laws are the Homeland Security Act of 2002 (as amended by the Intelligence Reform and Terrorism Prevention Act of 2004), the Privacy Act of 1974, and the E-Gov Act of 2002. Based on these laws, the Privacy Office's major responsibilities can be summarized in four broad categories: (1) reviewing and approving PIAs, (2) integrating privacy considerations into DHS decision making, (3) reviewing and approving public notices required by the Privacy Act, and (4) preparing and issuing reports.

Section 208 of the E-Gov Act is designed to enhance protection of personally identifiable information in government information systems and information collections by requiring that agencies conduct PIAs. According to OMB guidance, a PIA is an analysis of how information is handled: (1) to ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) to determine the risks and effects of collecting, maintaining, and disseminating personally identifiable information in an electronic information system; and (3) to examine and evaluate protections and alternative processes for handling information to mitigate potential risks to privacy. Agencies must conduct PIAs before they (1) develop or procure information technology that collects, maintains, or disseminates personally identifiable information or (2) initiate any new data collections of personal information that will be collected, maintained, or disseminated using information technology—if the same questions are asked of 10 or more people. To the extent that PIAs are made publicly available, they provide explanations to the public about such things as what information will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected. Further, a PIA can serve as a tool to guide system development decisions that have a privacy impact.

The Privacy Office is responsible for ensuring departmental compliance with the privacy provisions of the E-Gov Act. Specifically, the Chief Privacy Officer must ensure compliance with the E-Government Act requirement that agencies perform PIAs. In addition, the Homeland Security Act requires the Chief Privacy Officer to conduct a PIA for proposed rules of the department on the privacy of personal information. The Privacy Office's involvement in the PIA process also serves to address the Homeland Security Act requirement that the Chief Privacy Officer assure that technology sustains and does not erode privacy protections.

Integrating privacy considerations into the DHS decision-making process

Several of the Privacy Office's statutory responsibilities involve ensuring that the major decisions and operations of the department do not have an adverse impact on privacy.
Specifically, the Homeland Security Act requires that the Privacy Office assure that the use of technologies by the department sustains, and does not erode, privacy protections relating to the use, collection, and disclosure of personal information. The act further requires that the Privacy Office evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government. It also requires the office to coordinate with the DHS Officer for Civil Rights and Civil Liberties on those issues.

Reviewing and approving public notices required by the Privacy Act

The Privacy Office is required by the Homeland Security Act to assure that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974. The Privacy Act places limitations on agencies' collection, disclosure, and use of personally identifiable information that is maintained in their systems of records. The act defines a record as any item, collection, or grouping of information about an individual that is maintained by an agency and contains that individual's name or other personal identifier, such as a Social Security number. It defines "system of records" as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires agencies to notify the public, via a notice in the Federal Register, when they create or modify a system of records. This notice is known as a system-of-records notice and must include information such as the type of information collected, the types of individuals about whom information is collected, the intended "routine" uses of the information, and procedures that individuals can use to review and correct their personal information. The act also requires agencies to define—and limit themselves to—specific purposes for collecting the information.

The Fair Information Practices, which form the basis of the Privacy Act, are a set of principles for protecting the privacy and security of personal information that were first proposed in 1973 by a U.S. government advisory committee. These principles were intended to address what the committee considered the poor level of protection afforded to privacy under the law as it then stood. Since that time, the Fair Information Practices have been widely adopted as a benchmark for evaluating the adequacy of privacy protections. Appendix II contains a summary of the Fair Information Practices.

The Homeland Security Act requires the Privacy Office to prepare annual reports to Congress detailing the department's activities affecting privacy, including complaints of privacy violations and implementation of the Privacy Act of 1974. In addition to the reporting requirements under the Homeland Security Act, Congress has occasionally directed the Privacy Office to report on specific technologies and programs. For example, in the conference report for the DHS appropriations act for fiscal year 2005, Congress directed the Privacy Office to report on DHS's use of data mining technologies. Congress asked for a follow-up report on data mining in the conference report for the fiscal year 2007 DHS appropriations bill.
The Intelligence Reform and Terrorism Prevention Act of 2004 also required the Chief Privacy Officer to submit a report to Congress on the privacy and civil liberties impact of the DHS-maintained Automatic Selectee and No-Fly lists, which contain names of potential airline passengers who are to be selected for secondary screening or not allowed to board aircraft. In addition, the Privacy Office can initiate its own investigations and produce reports under its Homeland Security Act authority to report on complaints of privacy violations and to assure that technologies sustain and do not erode privacy protections.

The DHS Privacy Office has made significant progress in addressing its statutory responsibilities under the Homeland Security Act by developing processes to ensure implementation of privacy protections in departmental programs. For example, the Privacy Office has established processes for ensuring departmental compliance with the PIA requirement in the E-Gov Act of 2002. Instituting this framework has led to increased attention to privacy requirements on the part of departmental components, contributing to an increase in the quality and number of PIAs issued. The Privacy Office has addressed its mandate to assure that technologies sustain, and do not erode, privacy protections through a variety of actions, including implementing its PIA compliance framework, establishing a federal advisory committee, raising awareness of privacy issues through a series of public workshops, and participating in policy development for several major DHS initiatives. The office has also taken action to address its mandate to evaluate regulatory and legislative proposals involving the use of personal information by the federal government and has coordinated with the DHS Officer for Civil Rights and Civil Liberties.

While substantial progress has been made in these areas, less has been achieved in other important aspects of privacy protection. For example, while the Privacy Office has reviewed, approved, and issued 56 new and revised Privacy Act system-of-records notices since its establishment, little progress has been made in updating notices for legacy systems of records—older systems of records that were originally developed by other agencies prior to the creation of DHS. Because many of these notices are not up-to-date, the department is not in compliance with OMB requirements that system-of-records notices be reviewed biennially, nor can it be assured that the privacy implications of its many systems that process and maintain personal information have been fully and accurately disclosed to the public.

Further, the Privacy Office has generally not been timely in issuing public reports, potentially limiting their value and impact. The Homeland Security Act requires that the Privacy Office report annually to Congress on department activities that affect privacy, including complaints of privacy violations. However, the office has issued only two annual reports within the 3-year period since it was established in April 2003, and one of these did not include complaints of privacy violations as required. Required reports on several specific topics have also been late. Further, the office initiated its own investigations of specific programs and produced reports on these reviews, several of which were not publicly released until long after concerns had been addressed.
Late issuance of reports has a number of negative consequences beyond failure to comply with mandated deadlines, including a potential reduction in the reports' value and erosion of the office's credibility.

One of the Privacy Office's primary responsibilities is to review and approve DHS PIAs, thus ensuring departmental compliance with the privacy provisions (section 208) of the E-Gov Act of 2002. The E-Gov Act requires that federal agencies perform PIAs before developing or procuring technology that collects, maintains, or disseminates personally identifiable information, or when initiating a new collection of personally identifiable information using information technology. In addition, the Homeland Security Act specifically directs the office to perform PIAs for proposed departmental rules concerning the privacy of personal information. Further, the Privacy Office's involvement in the PIA process also addresses its broad Homeland Security Act requirement to "assure that technology sustains and does not erode privacy protections."

The centerpiece of the Privacy Office's compliance framework is its written guidance on when a PIA must be conducted, how the associated analysis should be performed, and how the final document should be written. Although based on OMB's guidance, the Privacy Office's guidance goes further in several areas. For example, the guidance does not exempt national security systems and also clarifies that systems in the pilot testing phase are not exempt. The DHS guidance is also more specific than OMB's about the level of detail to be provided. For example, the DHS guidance requires a discussion of a system's data retention period; procedures for individual access, redress, and correction of information; and the technologies used in the system, such as biometrics or RFID.

The Privacy Office has taken steps to continually improve its PIA guidance. Initially released in February 2004, the guidance has been updated twice, in July 2005 and March 2006. These updates have increased the emphasis on describing the privacy analysis that should take place in making system design decisions that affect privacy. For example, regarding data retention, the latest guidance requires program officials to explain why the personal information in question is needed for the specified retention period. Based on feedback from DHS components, the Privacy Office plans to update the guidance again in 2007 to clarify questions on data mining and the use of commercial data. To accompany its written guidance, the Privacy Office has also developed a PIA template and a number of training sessions to further assist DHS personnel.

In addition to written guidance and training, the office developed a procedure, called a privacy threshold analysis (PTA), for identifying which DHS systems contain personally identifiable information and which require PIAs. The privacy threshold analysis is a brief assessment that requires system owners to answer six basic questions on the nature of their systems and whether the systems contain personally identifiable information. System owners complete the privacy threshold analysis document and submit it to the Privacy Office for an official determination as to whether a PIA is required. Since January 2006, all DHS systems have been required to undergo privacy threshold analyses.

Our analysis of published DHS PIAs shows significant quality improvements in those completed recently compared with those from 2 or 3 years ago.
Overall, there is a greater emphasis on analysis of system development decisions that affect privacy, because the guidance now requires that such analysis be performed and described. For example, the most recent PIAs include separate privacy impact analyses for several major topics, including planned uses of the system and information, plans for data retention, and the extent to which the information is to be shared outside of DHS. This contrasts with the earliest PIAs published by the Privacy Office, which do not include any of these analyses. The emphasis on analysis should allow the public to more easily understand a system and its impact on privacy. Further, our analysis found that use of the template has resulted in a more standardized structure, format, and content, making the PIAs more easily understandable to the general reader.

In addition to its positive impact on DHS, the Privacy Office's framework has been recognized by others outside the department. For example, the Department of Justice has adopted the DHS Privacy Office's guidance and template with only minor modifications. Further, privacy advocacy groups have commended the Privacy Office for developing the guidance and associated training sessions, citing this as one of the office's most significant achievements. Figure 2 illustrates the steps in the development process as established by the Privacy Office's guidance.

In addition to written guidance, the Privacy Office has taken steps to integrate PIA development into the department's established operational processes. For example, the Privacy Office coordinated with the Office of the Chief Information Officer (OCIO) to include the privacy threshold analysis requirement as part of OCIO's effort to compile an inventory of major information systems required by the Federal Information Security Management Act. Through this coordination, the Privacy Office was able to have the PTA requirement incorporated into the software application that DHS uses to track agency compliance with the Federal Information Security Management Act. The Privacy Office also coordinated with OCIO to include submission of a privacy threshold analysis as a requirement within the larger certification and accreditation process. That process requires IT system owners to evaluate security controls to ensure that security risks have been properly identified and mitigated. The actions they have taken are then scored, and systems must receive a certain minimum score in order to be certified and accredited. The inclusion of the PTA as part of the systems inventory and in the certification and accreditation process has enabled the Privacy Office to better identify systems containing personally identifiable information that may require a PIA.

Further, the Privacy Office is using the OMB Exhibit 300 budget process as an additional opportunity to ensure that systems containing personal information are identified and that PIAs are conducted when needed. OMB requires agencies to submit an Exhibit 300 Capital Asset Plan and Business Case for their major information technology systems in order to receive funding. The Exhibit 300 template asks whether a system has a PIA and if it is publicly available. Because the Privacy Office gives final departmental approval for all such assessments, it is able to use the Exhibit 300 process to ensure the assessments are completed. According to Privacy Office officials, the threat of losing funds has helped to encourage components to conduct PIAs.
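To make the threshold logic described above concrete, the following minimal sketch in Python encodes the E-Gov Act triggers and the DHS non-exemptions as this report summarizes them. It is purely illustrative: the data structure, field names, and decision function are ours, not the actual six-question PTA form, and in practice the determination is made by the Privacy Office based on the submitted PTA rather than mechanically.

```python
# Hypothetical sketch of the privacy threshold logic described above.
# Illustrative only -- not the actual DHS PTA form or its wording.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    contains_pii: bool             # collects, maintains, or disseminates PII
    new_it_development: bool       # developing or procuring IT that handles PII
    new_collection_10_plus: bool   # new collection asking the same questions of 10+ people via IT
    national_security_system: bool # tracked, but NOT an exemption under DHS guidance
    pilot_phase: bool              # tracked, but NOT an exemption under DHS guidance

def pia_required(system: SystemProfile) -> bool:
    """Apply the E-Gov Act triggers as summarized in this report.

    Note that neither national-security nor pilot status short-circuits
    the decision: DHS guidance goes further than OMB's and exempts neither.
    """
    if not system.contains_pii:
        return False
    return system.new_it_development or system.new_collection_10_plus

# Example: a pilot-phase system initiating a new survey of the public.
profile = SystemProfile(
    contains_pii=True,
    new_it_development=False,
    new_collection_10_plus=True,
    national_security_system=False,
    pilot_phase=True,
)
print(pia_required(profile))  # True -- pilot status does not exempt the system
```

The sketch also suggests why embedding the PTA in the systems inventory, certification and accreditation, and Exhibit 300 processes matters: each gives the Privacy Office a reliable trigger point at which the threshold questions must be answered.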
Integration of the PIA requirement into these management processes (the systems inventory, the certification and accreditation process, and the Exhibit 300 reviews) is beneficial in that it provides an opportunity to address privacy considerations during systems development, as envisioned by OMB's guidance.

Because of concerns expressed by component officials that the Privacy Office's review process took a long time and was difficult to understand, the office has made efforts to improve the process and make it more transparent to DHS components. Specifically, the office established a five-stage review process. Under this process, a PIA must satisfy all the requirements of a given stage before it can progress to the next one. The review process is intended to take 5 to 6 weeks, with each stage intended to take 1 week. Figure 3 illustrates the stages of the review process.

Through efforts such as the compliance framework, the Privacy Office has steadily increased the number of PIAs it has approved and published each year. As shown in figure 4, PIA output by the Privacy Office has more than doubled since 2004. According to Privacy Office officials, the increase in output was aided by the development and implementation of the Privacy Office's structured guidance and review process. In addition, Privacy Office officials stated that as DHS components gain more experience, the output should continue to increase.

Because the Privacy Office has focused departmental attention on the development and review process and established a structured framework for identifying systems that need PIAs, the number of identified DHS systems requiring a PIA has increased dramatically. According to its annual Federal Information Security Management Act reports, DHS identified 46 systems as requiring a PIA in fiscal year 2005 and 143 systems in fiscal year 2006. Based on the privacy threshold analysis process, the Privacy Office estimates that 188 systems will require a PIA in fiscal year 2007. Considering that only 25 were published in fiscal year 2006 (at that rate, working through 188 systems would take more than 7 years), it will likely be very difficult for DHS to expeditiously develop and issue PIAs for all of these systems, because developing and approving them can be a lengthy process. According to estimates by Privacy Office officials, it takes approximately 6 months to develop and approve a PIA, but the office is working to reduce this time.

The Privacy Office is examining several potential changes to the development process that would allow it to process an increased number of PIAs. One such option is to allow DHS components to quickly amend preexisting PIAs. An amendment would only need to contain information on changes to the system and would allow for quicker development and review. The Privacy Office is also considering developing standardized PIAs for commonly used types of systems or uses. For example, such an assessment might be developed for local area networks. Systems intended to collect or use information outside what is specified in the standardized PIA would need approval from the Privacy Office.

Of potential help in dealing with an increasing PIA workload is the establishment of full-time privacy officers within key DHS components. Components with a designated privacy officer have generally produced more PIAs than components without privacy officers. Of the 11 DHS components that have published PIAs, only 3 have designated privacy officers. Yet these 3 components account for 57 percent of all published DHS PIAs. Designating privacy officers in key DHS components, such as Customs and Border Protection, the U.S.
Coast Guard, Immigration and Customs Enforcement, and the Federal Emergency Management Agency, could help components draft PIAs that the Privacy Office can process more expeditiously. Components such as these have a high volume of programs that interface directly with the public. Although the Privacy Office has recommended that such privacy officers be designated, the department has not yet appointed privacy officers in any of these components. Until DHS does so, the Privacy Office will likely continue to be challenged in ensuring that PIAs are prepared, reviewed, and approved in a timely fashion.

The Privacy Office has also taken steps to integrate privacy considerations in the DHS decision-making process. These actions are intended to address a number of statutory requirements, including that the Privacy Office assure that the use of technologies sustain, and do not erode, privacy protections; that it evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government; and that it coordinate with the DHS Officer for Civil Rights and Civil Liberties.

In 2004, the first Chief Privacy Officer established the DHS Data Privacy and Integrity Advisory Committee to advise her and the Secretary on issues within the department that affect individual privacy, as well as data integrity, interoperability, and other privacy-related issues. The committee has examined a variety of privacy issues, produced reports, and made recommendations. Most recently, in December 2006, the committee adopted two reports: one on the use of RFID for identity verification and another on the use of commercial data. As previously mentioned, the Privacy Office plans to update its PIA guidance to include further instructions on the use of commercial data. According to Privacy Office officials, this update will be based, in part, on the advisory committee's report on commercial data. Appendix III contains a full list of the advisory committee's publications.

In addition to issuing its reports, which are publicly available, the committee meets quarterly in Washington, D.C., and in other parts of the country where DHS programs operate. These meetings are open to the public, and transcripts of the meetings are posted on the Privacy Office's Web site. DHS officials from major programs and initiatives involving the use of personal data, such as US-VISIT, Secure Flight, and the Western Hemisphere Travel Initiative, have testified before the committee. Private sector officials have also testified on topics such as data integrity, identity authentication, and RFID. Because the committee is made up of experts from the private sector and the academic community, it brings an outside perspective to privacy issues through its reports and recommendations. In addition, because it was established as a federal advisory committee, its products and proceedings are publicly available and thus provide a public forum for the analysis of privacy issues that affect DHS operations.

The Privacy Office has also taken steps to raise awareness of privacy issues by holding a series of public workshops. The first workshop, on the use of commercial data for homeland security, was held in September 2005. Panel participants consisted of representatives from academia, the private sector, and government. In April 2006, a second workshop addressed the concept of public notices and freedom of information frameworks.
More recently, in June 2006, a workshop was held on the policy, legal, and operational frameworks for PIAs and privacy threshold analyses and included a tutorial on conducting PIAs. Hosting public workshops is beneficial in that it allows for communication between the Privacy Office and those who may be affected by DHS programs, including the privacy advocacy community and the general public.

Another part of the Privacy Office's efforts to carry out its Homeland Security Act requirements is its participation in departmental policy development for initiatives that have a potential impact on privacy. The Privacy Office has been involved in policy discussions related to several major DHS initiatives and, according to department officials, the office has provided input on several privacy-related decisions. The following are major initiatives in which the Privacy Office has participated.

Passenger name record negotiations with the European Union

United States law requires airlines operating flights to or from the United States to provide the Bureau of Customs and Border Protection (CBP) with certain passenger reservation information for purposes of combating terrorism and other serious criminal offenses. In May 2004, an international agreement on the processing of this information was signed by DHS and the European Union. Prior to the agreement, CBP established a set of terms for acquiring and protecting data on European Union citizens, referred to as the "Undertakings." In September 2005, under the direction of the first Chief Privacy Officer, the Privacy Office issued a report on CBP's compliance with the Undertakings in which it provided guidance on necessary compliance measures and also required certain remediation steps. For example, the Privacy Office required CBP to review and delete data outside the 34 data elements permitted by the agreement. According to the report, the deletion of these extraneous elements was completed in August 2005 and was verified by the Privacy Office. In October 2006, DHS and the European Union completed negotiations on a new interim agreement concerning the transfer and processing of passenger reservation information. The Director of International Privacy Policy within the Privacy Office participated in these negotiations along with others from DHS in the Policy Office, Office of General Counsel, and CBP.

The Western Hemisphere Travel Initiative is a joint effort between DHS and the Department of State to implement new documentation requirements for certain U.S. citizens and nonimmigrant aliens entering the United States. DHS and State have proposed the creation of a special identification card that would serve as an alternative to a traditional passport for use by U.S. citizens who cross land borders or travel by sea between the United States, Canada, Mexico, the Caribbean, or Bermuda. The card is to use a technology called vicinity RFID to transmit information on travelers to CBP officers at land and sea ports of entry. Advocacy groups have raised concerns about the proposed use of vicinity RFID because of privacy and security risks, due primarily to the ability to read information from these cards from distances of up to 20 feet. The Privacy Office was consulted on the choice of identification technology for the cards. According to the DHS Policy Office, Privacy Office input led to a decision not to store or transmit personally identifiable information on the RFID chip on the card.
Instead, DHS is planning on transmitting a randomly generated identifier for individuals, which is to be used by DHS to retrieve information about the individual from a centralized database.

REAL ID Act of 2005

Among other things, the REAL ID Act requires DHS, in consultation with the Department of Transportation and the states, to issue regulations that set minimum standards for state-issued REAL ID drivers' licenses and identification cards to be accepted for official purposes after May 11, 2008. Advocacy groups have raised a number of privacy concerns about REAL ID, chiefly that it creates a de facto national ID that could be used in the future for privacy-infringing purposes and that it puts individuals at increased risk of identity theft. The DHS Policy Office reported that it included Privacy Office officials, as well as officials from the Office for Civil Rights and Civil Liberties, in developing its implementing rule for REAL ID. The Privacy Office's participation in REAL ID also served to address its requirement to evaluate legislative and regulatory proposals concerning the collection, use, and disclosure of personal information by the federal government. According to its November 2006 annual report, the Privacy Office championed the need for privacy protections regarding the collection and use of the personal information that will be stored on REAL ID drivers' licenses. Further, the office reported that it funded a contract to examine the creation of a state federation to implement the information sharing required by the act in a privacy-sensitive manner.

As we have previously reported, DHS has used personal information obtained from commercial data providers for immigration, fraud detection, and border screening programs but, like other agencies, does not have policies in place concerning its uses of these data. Accordingly, we recommended that DHS, as well as other agencies, develop such policies. In response to the concerns raised in our report and by privacy advocacy groups, Privacy Office officials said they were drafting a departmentwide policy on the use of commercial data. Once drafted by the Privacy Office, this policy is to undergo a departmental review process (including review by the Policy Office, General Counsel, and Office of the Secretary), followed by a review by OMB prior to adoption.

These examples demonstrate specific involvement of the Privacy Office in major DHS initiatives. However, Privacy Office input is only one factor that DHS officials consider in formulating decisions about major programs, and Privacy Office participation does not guarantee that privacy concerns will be fully addressed. For example, our previous work has highlighted problems in implementing privacy protections in specific DHS programs, including Secure Flight and the ADVISE program. Nevertheless, the Privacy Office's participation in policy decisions provides an opportunity for privacy concerns to be raised explicitly and considered in the development of DHS policies.

The Privacy Office has also taken steps to address its mandate to coordinate with the DHS Officer for Civil Rights and Civil Liberties on programs, policies, and procedures that involve civil rights, civil liberties, and privacy considerations, and to ensure that "Congress receives appropriate reports on such programs." The DHS Officer for Civil Rights and Civil Liberties cited three specific instances where the offices have collaborated.
First, as stated previously, both offices have participated in the working group involved in drafting the implementing regulations for REAL ID. Second, the two offices coordinated in preparing the Privacy Office's report to Congress assessing the privacy and civil liberties impact of the No-Fly and Selectee lists used by DHS for passenger prescreening. Third, the two offices coordinated on providing input for the "One-Stop Redress" initiative, a joint initiative between the Department of State and DHS to implement a streamlined redress center for travelers who have concerns about their treatment in the screening process.

The DHS Privacy Office is responsible for reviewing and approving DHS system-of-records notices to ensure that the department complies with the Privacy Act of 1974. Specifically, the Homeland Security Act requires the Privacy Office to "assur[e] that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974." The Privacy Act requires that federal agencies publish notices in the Federal Register on the establishment or revision of systems of records. These notices must describe the nature of a system of records and the information it maintains. Additionally, OMB has issued various guidance documents for implementing the Privacy Act. OMB Circular A-130, for example, outlines agency responsibilities for maintaining records on individuals and directs government agencies to conduct biennial reviews of each system-of-records notice to ensure that it accurately describes the system of records.

The Privacy Office has taken steps to establish a departmental process for complying with the Privacy Act. It issued a management directive that outlines its own responsibilities as well as those of component-level officials. Under this policy, the Privacy Office is to act as the department's representative for matters relating to the Privacy Act. The Privacy Office is to issue and revise, as needed, departmental regulations implementing the Privacy Act and approve all system-of-records notices before they are published in the Federal Register. DHS components are responsible for drafting system-of-records notices and submitting them to the Privacy Office for review and approval. The management directive was in addition to system-of-records notice guidance published by the Privacy Office in August 2005. The guidance discusses the requirements of the Privacy Act and provides instructions on how to prepare system-of-records notices by listing key elements and explaining how they must be addressed. The guidance also lists common routine uses and provides standard language that DHS components may incorporate into their notices. As of February 2007, the Privacy Office had approved and published 56 system-of-records notices, including updates and revisions as well as new documents.

In establishing Privacy Act processes, the Privacy Office has also begun to integrate the system-of-records notice and PIA development processes. The Privacy Office now generally requires that system-of-records notices submitted to it for approval be accompanied by PIAs. This is not an absolute requirement, because the need to conduct PIAs, as stipulated by the E-Gov Act, is not based on the same concept of a "system of records" used by the Privacy Act.
Nevertheless, the Privacy Office's intention is to ensure that, when the requirements do coincide, a system's PIA is aligned closely with the related system-of-records notice.

However, the Privacy Office has not yet established a process for conducting a biennial review of system-of-records notices, as required by OMB. OMB Circular A-130 directs federal agencies to review their notices biennially to ensure that they accurately describe all systems of records. Where changes are needed, the agencies are to publish amended notices in the Federal Register. The establishment of DHS involved the consolidation of a number of preexisting agencies; thus, a substantial number of systems are operating under preexisting, or "legacy," system-of-records notices—218, as of February 2007. These documents may not reflect changes that have occurred since they were prepared. For example, the system-of-records notice for the Treasury Enforcement Communications System has not been updated to reflect changes in how personal information is used that have occurred since DHS took over the system from the Department of the Treasury.

The Privacy Office acknowledges that identifying, coordinating, and updating legacy system-of-records notices is the biggest challenge it faces in ensuring DHS compliance with the Privacy Act. Because it focused its initial efforts on PIAs and gave priority to DHS systems of records that were not covered by preexisting notices, the office did not give the same priority to performing a comprehensive review of existing notices. According to Privacy Office officials, the office is encouraging DHS components to update legacy system-of-records notices and is developing new guidance intended to be more closely integrated with its PIA guidance. However, no significant reduction has yet been made in the number of legacy system-of-records notices that need to be updated. By not reviewing notices biennially, the department is not in compliance with OMB direction. Further, by not keeping its notices up-to-date, DHS hinders the public's ability to understand the nature of DHS systems of records and how personal information is being used and protected. Inaccurate system-of-records notices may make it difficult for individuals to determine whether their information is being used in a way that is incompatible with the purpose for which it was originally collected.

Section 222 of the Homeland Security Act requires that the Privacy Officer report annually to Congress on "activities of the department that affect privacy, including complaints of privacy violations, implementation of the Privacy Act of 1974, internal controls, and other matters." The act does not prescribe a deadline for submission of these reports; however, the requirement to report "on an annual basis" suggests that each report should cover a 1-year time period and that subsequent annual reports should be provided to Congress 1 year after the previous report was submitted. Congress has also required that the Privacy Office report on specific departmental activities and programs, including data mining and passenger prescreening programs. In addition, the first Chief Privacy Officer initiated several investigations and prepared reports on them to address requirements to report on complaints of privacy violations and to assure that technologies sustain and do not erode privacy protections.
In addition to satisfying mandates, the issuance of timely public reports helps in adhering to the fair information practices, which the Privacy Office has pledged to support. Public reports address openness (the principle that the public should be informed about privacy policies and practices and that individuals should have a ready means of learning about the use of personal information) and accountability (the principle that those controlling the collection or use of personal information should be accountable for taking steps to ensure implementation of the fair information principles).

The Privacy Office has not been timely in addressing its requirement to report annually to Congress, and in one case its report was incomplete. The Privacy Office's first annual report, issued in February 2005, covered the 14 months from April 2003 through June 2004. A second annual report, for the next 12 months, was never issued. Instead, information about that period was combined with information about the next 12-month period, and a single report was issued in November 2006 covering the office's activities from July 2004 through July 2006. While this report generally addressed the content specified by the Homeland Security Act, it did not include the required description of complaints of privacy violations.

Other reports produced by the Privacy Office have not met mandated deadlines or have been issued long after privacy concerns had been addressed. For example, although Congress required a report on the privacy and civil liberties effects of the No-Fly and Automatic Selectee lists by June 2005, the report was not issued until April 2006, nearly a year late. In addition, although required by December 2005, the Privacy Office's report on DHS data mining activities was not provided to Congress until July 2006 and was not made available to the public on the Privacy Office Web site until November 2006.

In addition, the first Chief Privacy Officer initiated four investigations of specific programs and produced reports on these reviews. Although two of the four reports were issued in a relatively timely fashion, the other two were issued long after privacy concerns had been raised and addressed. For example, a report on the Multi-state Anti-Terrorism Information Exchange (MATRIX) program, initiated in response to a complaint submitted by the American Civil Liberties Union in May 2004, was not issued until two and a half years later, long after the program had been terminated. As another example, although drafts of the recommendations contained in the Secure Flight report were shared with TSA staff as early as summer 2005, the report was not released until December 2006, nearly a year and a half later. Table 1 summarizes DHS Privacy Office reports issued to date, including both statutorily required and self-initiated reports.

According to Privacy Office officials, a number of factors contributed to the delayed release of its reports, including the time required to consult with affected DHS components as well as the departmental clearance process, which includes the Policy Office, the Office of General Counsel, and the Office of the Secretary. After that, drafts must be sent to OMB for further review. In addition, the Privacy Office did not establish schedules for completing these reports that took into account the time needed for coordination with components or departmental and OMB review.
Regarding the omission of complaints of privacy violations from the latest annual report, Privacy Office officials noted that the report cites previous reports on Secure Flight and the MATRIX program, which were initiated in response to alleged privacy violations, and that during the time period in question there were no additional complaints of privacy violations. However, the report itself provides no specific statements about the status of privacy complaints; it does not state that there were no privacy complaints received.

Late issuance of reports has a number of negative consequences beyond noncompliance with mandated deadlines. First, the value these reports are intended to provide is reduced when the information they contain is no longer timely or relevant. In addition, since these reports serve as a critical window into the operations of the Privacy Office and into DHS programs that use personal information, not issuing them in a timely fashion diminishes the office's credibility and can raise questions about the extent to which the office is receiving executive-level attention. For example, delays in releasing the most recent annual report led a number of privacy advocates to question whether the Privacy Office had adequate authority and executive-level support. Congress also voiced this concern in passing the Department of Homeland Security Appropriations Act of 2007, which states that none of the funds made available in the act may be used by any person other than the Privacy Officer to "alter, direct that changes be made to, delay, or prohibit the transmission to Congress" of its annual report. In addition, on January 5, 2007, legislation entitled the Privacy Officer with Enhanced Rights Act of 2007 was introduced. This bill, among other things, would provide the Privacy Officer with the authority to report directly to Congress without prior comment or amendment by either OMB officials or DHS officials outside the Privacy Office. Until its reports are issued in a timely fashion, questions about the credibility and authority of the Privacy Office will likely remain.

The DHS Privacy Office has made significant progress in implementing its statutory responsibilities under the Homeland Security Act; however, more work remains to be accomplished. The office has made great strides in implementing a process for developing PIAs, contributing to greater output over time and higher-quality assessments. The Privacy Office has also provided the opportunity for privacy to be considered at key stages in systems development by incorporating PIA requirements into existing management processes. However, the Privacy Office faces a difficult task in reviewing and approving PIAs in a timely fashion for the large number of systems that require them. Component-level privacy officers could help coordinate the processing of PIAs, but until DHS appoints such officers, the Privacy Office will not benefit from their potential to help speed that processing.

Although the Privacy Office has made progress publishing new and revised Privacy Act notices since its establishment, privacy notices for DHS legacy systems of records have generally not been updated. The Privacy Office has not made it a priority to address the OMB requirement that existing notices be reviewed biennially. Until DHS reviews and updates its legacy notices as required by federal guidance, it cannot assure the public that its notices reflect current uses and protections of personal information.
Further, the Privacy Office has not issued reports in a timely fashion, and its most recent annual report did not address all of the content specified by the Homeland Security Act, which requires the office to report on complaints of privacy violations. A number of factors have contributed to the delayed release of its reports, including the time required to consult with affected DHS components and the departmental clearance process, and there is no schedule for completing reviews and issuing final reports. Late issuance of reports has a number of negative consequences beyond failure to comply with mandated deadlines, such as a perceived and real reduction in their value, a reduction in the office’s credibility, and the perception that the office lacks executive-level support. Until DHS develops a schedule for the timely issuance of reports, these negative consequences are likely to continue. We recommend that the Secretary of Homeland Security take the following four actions:
Designate full-time privacy officers at key DHS components, such as Customs and Border Protection, the U.S. Coast Guard, Immigration and Customs Enforcement, and the Federal Emergency Management Agency.
Implement a department-wide process for the biennial review of system-of-records notices, as required by OMB.
Establish a schedule for the timely issuance of Privacy Office reports (including annual reports) that appropriately considers all aspects of report development, including departmental clearance.
Ensure that the Privacy Office’s annual reports to Congress contain a specific discussion of complaints of privacy violations, as required by law.
We received written comments on a draft of this report from the DHS Departmental GAO/Office of Inspector General Liaison Office, which are reproduced in appendix IV. In its comments, DHS generally agreed with the content of the draft report and its recommendations and described actions initiated to address them. DHS stated that it appreciated GAO’s acknowledgement of its success in creating a standardized process for developing privacy compliance documentation for individual systems and managing the overall compliance process. DHS also stated that it appreciated recognition of the establishment of the DHS Data Privacy and Integrity Advisory Committee and the Privacy Office’s public meetings and workshops. In addition, DHS provided additional information about the international duties of the Privacy Office, specifically its outreach efforts with the European Union and its participation in regional privacy groups such as the Organization for Economic Cooperation and Development (OECD) and the Asia-Pacific Economic Cooperation forum. DHS also noted that it had issued its first policy guidance memorandum regarding handling of information on non-U.S. persons. Concerning our first recommendation that it designate full-time privacy officers in key departmental components, DHS noted that the recommendation was consistent with a departmental management directive on compliance with the Privacy Act and stated that it would take the recommendation “under advisement.” DHS noted that component privacy officers not only contribute to producing privacy impact assessments but also provide day-to-day privacy expertise within their components to programs at all stages of development. DHS concurred with the other three recommendations and noted actions initiated to address them.
Specifically, regarding our recommendation that DHS implement a process for the biennial review of system-of-records notices required by OMB, DHS noted that it is systematically reviewing legacy system-of-records notices in order to issue updated notices on a schedule that gives priority to systems with the most sensitive personally identifiable information. DHS also noted that the Privacy Office is to issue an updated system-of-records notice guide by the end of fiscal year 2007. Concerning our recommendation related to timely reporting, DHS stated that the Privacy Office will work with necessary components and programs affected by its reports to provide for both full collaboration and coordination within DHS. Finally, regarding our recommendation that the Privacy Office’s annual reports contain a specific discussion of privacy complaints, as required by law, DHS agreed that a consolidated reporting structure for privacy complaints within the annual report would assist in assuring Congress and the public that the Privacy Office is addressing the complaints that it receives. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested congressional committees. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or send e-mail to koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objective was to assess the progress of the Department of Homeland Security (DHS) Privacy Office in carrying out its responsibilities under federal law, including the Homeland Security Act of 2002 and the E-Government Act of 2002. To address this objective, we analyzed the Privacy Office’s enabling statutes (Section 222 of the Homeland Security Act and Section 8305 of the Intelligence Reform and Terrorism Prevention Act of 2004) and applicable federal privacy laws, including the Privacy Act of 1974 and Section 208 of the E-Government Act, to identify DHS Privacy Office responsibilities. We reviewed and analyzed Privacy Office policies, guidance, and reports, and interviewed Privacy Office officials, including the Chief Privacy Officer, the Acting Chief of Staff, and the Director of Privacy Compliance, to identify Privacy Office plans, priorities, and processes for implementing its responsibilities using available resources. We did not review or assess the Privacy Office’s Freedom of Information Act responsibilities. To further address our objective, we assessed the Privacy Office’s progress by comparing the information we gathered with the office’s statutory requirements and other responsibilities. We evaluated Privacy Office policies, guidance, and processes for ensuring compliance with the Homeland Security Act, the Privacy Act, and the E-Government Act. We analyzed the system-of-records notices and PIA development processes and assessed the progress of the office in implementing these processes.
This analysis included examining the Privacy Office’s privacy impact assessment output by fiscal year and assessing improvements to the overall quality of published privacy impact assessments and guidance over time. In addition, we interviewed the DHS Officer for Civil Rights and Civil Liberties; component-level privacy officers at the Transportation Security Administration, the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and U.S. Citizenship and Immigration Services; and cognizant component-level officials from Customs and Border Protection, Immigration and Customs Enforcement, and the DHS Policy Office. We also interviewed former DHS Chief Privacy Officers; the chair and vice-chair of the DHS Data Privacy and Integrity Advisory Committee; and privacy advocacy groups, including the American Civil Liberties Union, the Center for Democracy and Technology, and the Electronic Privacy Information Center. We performed our work at the DHS Privacy Office in Arlington, Virginia, and major DHS components in the Washington, D.C., metropolitan area. In addition, we attended DHS Data Privacy and Integrity Advisory Committee public meetings in Arlington, Virginia, and Miami, Florida. Our work was conducted from June 2006 to March 2007 in accordance with generally accepted government auditing standards. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Ways to strike that balance vary among countries and according to the type of information under consideration. The version of the Fair Information Practices shown in table 1 was issued by the Organization for Economic Cooperation and Development (OECD) in 1980 and has been widely adopted.
The Use of Commercial Data. Report No. 2006-03. December 6, 2006.
The Use of RFID for Human Identity Verification. Report No. 2006-02. December 6, 2006.
Framework for Privacy Analysis of Programs, Technologies, and Applications. Report No. 2006-01. March 7, 2006.
Recommendations on the Secure Flight Program. Report No. 2005-02. December 6, 2005.
The Use of Commercial Data to Reduce False Positives in Screening Programs. Report No. 2005-01. September 28, 2005.
Major contributors to this report were John de Ferrari, Assistant Director; Nancy Glover; Anthony Molet; David Plocher; and Jamie Pressman.
The Department of Homeland Security (DHS) Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003, as required by the Homeland Security Act of 2002. The Privacy Office's major responsibilities include: (1) reviewing and approving privacy impact assessments (PIA)--analyses of how personal information is managed in a federal system, (2) integrating privacy considerations into DHS decision making, (3) ensuring compliance with the Privacy Act of 1974, and (4) preparing and issuing annual reports and reports on key privacy concerns. GAO's objective was to examine progress made by the Privacy Office in carrying out its statutory responsibilities. GAO did this by comparing statutory requirements with Privacy Office processes, documents, and activities. The DHS Privacy Office has made significant progress in carrying out its statutory responsibilities under the Homeland Security Act and its related role in ensuring compliance with the Privacy Act of 1974 and E-Government Act of 2002, but more work remains to be accomplished. Specifically, the Privacy Office has made significant progress by establishing a compliance framework for conducting PIAs, which are required by the E-Gov Act. The framework includes formal written guidance, training sessions, and a process for identifying affected systems. The framework has contributed to an increase in the quality and number of PIAs issued as well as the identification of many more affected systems. The resultant workload is likely to prove difficult to process in a timely manner. Designating privacy officers in certain DHS components could help speed processing of PIAs, but DHS has not yet taken action to make these designations. The Privacy Office has also taken actions to integrate privacy considerations into the DHS decision-making process by establishing an advisory committee, holding public workshops, and participating in policy development. However, limited progress has been made in updating public notices required by the Privacy Act for systems of records that were in existence prior to the creation of DHS. These notices should identify, among other things, the type of data collected, the types of individuals about whom information is collected, and the intended uses of the data. Until the notices are brought up-to-date, the department cannot assure the public that the notices reflect current uses and protections of personal information. Further, the Privacy Office has generally not been timely in issuing public reports. For example, a report on the Multi-state Anti-Terrorism Information Exchange program--a pilot project for law enforcement sharing of public records data--was not issued until long after the program had been terminated. Late issuance of reports has a number of negative consequences, including a potential reduction in the reports' value and erosion of the office's credibility.
The Summer Food Service Program is a federal entitlement program that provides funds for program sponsors to serve free nutritious meals to children in low-income areas when school is not in session. It is administered by USDA’s Food and Nutrition Service, which provides money to state agencies to operate the program and to reimburse local eligible sponsors for meals served to children at designated locations. Eligible sponsoring organizations include (1) public or private nonprofit schools; (2) units of local, municipal, county, or state governments, such as county or city recreation programs; (3) private nonprofit groups, such as Boys and Girls Clubs or churches; (4) residential camps; and (5) National Youth Sports Programs. In fiscal year 1997, sponsors served over 128 million meals at a total federal cost of about $258 million. Local program sponsors can qualify to be reimbursed for the free meals served to all children 18 or younger by operating a site in an eligible area. Eligible areas are those in which at least 50 percent of the local children are from households with an income at or below the eligibility level for free and reduced-price school meals—185 percent of the federal poverty guidelines ($29,693 for a family of four in the summer of 1997). Sponsors can also qualify for reimbursements for free meals served to all children at sites not located in eligible areas if at least 50 percent of the children enrolled at such sites are eligible for free or reduced-price school meals. Finally, camps may be reimbursed only for meals that are served to children who have been individually determined to be eligible because of their household income. The meals that sponsors provide must meet the program’s nutritional requirements. Most sponsors can receive federal subsidies for only two meals per child per day. However, camps and programs primarily serving migrant children can receive subsidies for up to three meals each day for each child. This three-meal allowance is a change in the program made by the Welfare Reform Act. Previously, these sponsors could receive reimbursements for up to four meals per day. Sponsors are reimbursed in two different categories for their costs of preparing and serving free meals. One category covers the administrative costs incurred in the management of the Summer Food Service Program, such as office expenses, the support staff’s salaries, insurance, and some financial management costs. The second category covers the operating costs for purchasing, preparing, transporting, and serving the food; supporting program activities; paying salaries for the staff supervising the children; and providing transportation in rural areas. Sponsors must maintain records to document all costs and the number of meals they claim for reimbursements. Sponsors’ reimbursements are based on the lesser of (1) the number of approved meals served multiplied by the established rate for each type of meal or (2) the actual costs reported. The reimbursement rates for both administrative and operating costs are set by law and adjusted each year to reflect changes in the Consumer Price Index. Sponsors with costs that exceed the federal reimbursements must find other sources of funding to pay their additional costs. Some sponsors anticipate that program costs will exceed the federal reimbursements, and these sponsors offset their excess costs with other sources of funds.
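Because the lesser-of rule determines how much of a sponsor’s costs the program covers, a short illustration may help. The following Python sketch applies the rule described above; the meal counts, per-meal rates, and reported costs shown are hypothetical placeholders, not the statutory rates.

def compute_reimbursement(meals_served, rates, reported_costs):
    """Return the lesser of (1) approved meals multiplied by the established
    rate for each meal type, summed across types, or (2) actual reported costs."""
    rate_based_maximum = sum(meals_served[m] * rates[m] for m in meals_served)
    return min(rate_based_maximum, reported_costs)

# Hypothetical example; the counts, rates, and costs are illustrative only.
meals_served = {"breakfast": 1000, "lunch": 1200, "snack": 800}
rates = {"breakfast": 1.25, "lunch": 2.15, "snack": 0.50}  # dollars per meal
print(compute_reimbursement(meals_served, rates, reported_costs=5000.00))
# Rate-based maximum = 1000*1.25 + 1200*2.15 + 800*0.50 = 4,230.00, which is
# less than the 5,000.00 reported, so the sponsor is reimbursed 4,230.00 and
# must cover the remaining 770.00 from other sources.

A sponsor whose reported costs fell below the rate-based maximum would instead be reimbursed only its reported costs.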
Other sponsors inadvertently exceed the federal reimbursements by small amounts because it is difficult to budget the program exactly. Because sponsors can claim reimbursements only for meals that are served to eligible children, their costs would exceed reimbursements if they have low attendance or prepare too many meals that cannot be claimed for reimbursements. The Welfare Reform Act reduced the operating subsidies for meals and snacks served under the Summer Food Service Program, effective for the 1997 summer. It did not reduce subsidies for administrative costs. Table 1 highlights the changes in the reimbursement rates for meals since 1996. While the 1997 and 1998 rates reflect increases to account for inflation, they are still lower than the rates established for 1996. Since the reduced reimbursements went into effect in 1997, the number of sponsors participating in the program has increased by 8 percent overall. Although there were some changes in sponsors from one year to the next, the characteristics of these sponsors—by the type of organization, number of sites operated, and number of meals and children served—remained about the same. Of those sponsors leaving the program, fewer than 10 percent left in each of the 2 years, and of these sponsors, few left because of the rate reduction, according to state officials. USDA and some states took actions following the rate reduction aimed at enrolling new sponsors and/or encouraging current sponsors to expand their programs. Since the reduced reimbursements have gone into effect, the total number of sponsors has increased by about 8 percent. According to the data provided by state officials, the total number of sponsors participating in the program rose from 3,753 in fiscal year 1996 to 3,875 in 1997, and to 4,046 in 1998. While some sponsors left the program, the number of sponsors joining it exceeded the number leaving. This overall increase in the number of sponsors occurred after virtually no increase from 1995 through 1996, according to USDA’s data. However, this overall increase is lower than the increases that occurred from 1991 through 1994, when the number of sponsors grew each year by about 8 to 10 percent. According to USDA officials, no growth occurred from 1995 to 1996 because of uncertainty stemming from the public policy debates over the future of the Summer Food Service Program and other child nutrition programs. Figure 1 shows the rate of change in the number of sponsors each year from 1991 through 1998. The types of organizations serving as program sponsors were similar in fiscal years 1996 and 1997. In both years, schools represented the largest group of sponsors (about 45 percent), followed by camps (19 percent), government agencies (about 17 percent), private nonprofit organizations (about 16 percent), and National Youth Sports Programs (3 percent). School sponsors can serve meals to children at school buildings as well as at other types of locations, such as parks and churches. Similarly, the rate reduction did not have a significant effect on the number of sites operated by sponsors. For example, in both fiscal years, half of the sponsors (51 percent) operated 1 site, and 6 percent operated 25 or more sites. In addition, between fiscal years 1996 and 1997, the total number of sites increased by 5 percent, from 29,220 to 30,587. As tables 2 and 3 show, the 1997 reduction in meal reimbursements did not appear to cause sponsors to change the size of their programs. 
There was little change in the size of sponsors—in terms of the number of meals and children served—between fiscal years 1996 and 1997. However, the number of meals and children served by individual sponsors varied greatly in each year. For example, one of the smallest nonprofit sponsors served 92 meals in 1997, while the largest—the New York City Board of Education—served over 8.8 million meals. In both years, a small percentage of the sponsors provided most of the meals. Only 5 percent of the sponsors served 62 percent of the children participating in July of each year, 1996 and 1997. Similarly, 5 percent of the sponsors served 58 percent and 59 percent of all meals in 1996 and 1997, respectively. After the first year’s experience with the reduced reimbursements, both USDA and some state officials expressed concern that, because of the reduced rates, more sponsors would drop out of the program in the future than did in 1997. USDA officials believed that some sponsors chose to continue participating in the program in 1997 to test their ability to manage the program financially with the reduced reimbursements and to determine whether they would continue in 1998. However, an increase in dropouts did not occur in 1998. According to the states’ information, a total of 675 sponsors have stopped participating in the Summer Food Service Program since the reimbursements were reduced. Of those that participated in fiscal year 1996, 370, or 9.9 percent, did not return in 1997; and 305, or 7.9 percent, of those that participated in 1997 did not return in 1998. According to state officials, few of the sponsors that dropped out of the program left specifically because of the reduced reimbursements. State officials identified the reduced reimbursements as the major reason for dropping out for only 37 (5.5 percent) of the 675 sponsors that left in fiscal years 1997 and 1998. For an additional 182 sponsors, state officials were unable to identify the major reason for dropping out. For some of these sponsors, the reduction may have been the reason they dropped out. The remaining 456 sponsors left for a variety of other reasons, such as the loss of personnel or lower participation than anticipated. Table 4 shows the number and percentage of sponsors leaving the program for various reasons since the rate reduction. According to our nationwide data, small sponsors were more likely to drop out of the Summer Food Service Program than large sponsors, and private nonprofit sponsors were more likely to drop out than other types of sponsors. This was also true for the sponsors that state officials identified as leaving the program because of the reduced reimbursements. For example, the sponsors that left the program in fiscal year 1997 because of the decrease had served an average of 207 children in July 1996, while the sponsors that remained in the program in 1997 had served an average of 809 children in July 1996. (App. II provides information on the dropout rates for various types of sponsors.) Following the reduction in meal reimbursements, some states and USDA took actions to enroll new sponsors and/or encourage current sponsors to expand their programs. These actions may have mitigated the impact of the rate reduction. For example, officials from several states said that they expanded outreach efforts to enroll more new sponsors.
Some states’ Summer Food Service Programs—such as those in Minnesota, New York, Vermont, and Washington—received state funds to offset the loss of federal funds in fiscal years 1997 and 1998. In addition, in 1998, USDA began an initiative to encourage sponsors’ participation by conducting outreach and reducing the administrative burden on sponsors. To encourage participation in 1998, USDA provided program information to the American School Food Service Journal, the National Conference of Mayors, and the National Recreation and Park Associations for distribution to schools, local governments, and private nonprofit organizations. USDA also provided materials to the states for their own outreach efforts. Furthermore, the Department issued 13 policy memorandums to the states revising and clarifying policies to improve the program’s operations and reduce its administrative burden. Despite the actions by USDA and the states to encourage new organizations to participate, one state official reported that the reduced reimbursement rates discouraged some organizations from entering the program. The total number of children participating in the program continued to increase in 1997 after the reduced reimbursements, as it had in all but one of the previous 6 years. However, some of the children served by sponsors in 1996 lost access to the program in 1997 when those sponsors left the program. The number of meals served also increased in 1997, as in the prior 6 years, even though welfare reform limited the number of meals that some sponsors could claim for reimbursements. In the year following the implementation of the reduced rates, the number of children participating in the program increased overall, as it did in all but one of the prior 6 years. According to the latest available USDA data, the average number of children served daily in July for the entire program increased by 2.3 percent, from 2,215,625 in 1996 to 2,266,319 in 1997. Figure 2 shows the rate of change in children’s participation from 1991 to 1997. When we used the states’ data to determine which types of sponsors contributed to the growth in the number of children served after the rate decrease in 1997, we found that existing sponsors had expanded their programs while new sponsors in 1997 were smaller than the sponsors that left the program after 1996. Sponsors that participated in both fiscal years 1996 and 1997 served 8 percent more children, on average, in 1997 than in 1996. On the other hand, new 1997 sponsors did not serve as many children, on average, as sponsors that left the program after 1996. Specifically, new 1997 sponsors served 19 percent fewer children, on average, in 1997 than were served in 1996 by sponsors that left the program. Despite the overall increase in the number of children served between 1996 and 1997, some children lost access to program benefits because the sponsors serving them dropped out, and the children were not served by other sponsors. These fluctuations in sponsors’ participation and in children’s access to the program occur even in years when the program’s rules and reimbursements do not change, according to USDA officials. The 370 sponsors that participated in 1996 and did not return in 1997 served approximately 118,224 children per day in July 1996. According to our analysis of the information provided by state officials, at least 17,238, or about 15 percent, of these children lost access to the program in 1997.
Sponsors that dropped out of the program specifically because of the reduction had served approximately 4,559 children in July 1996. At least 823, or 18 percent, of these children did not have access to the program in 1997. These 823 children represented about .03 percent of all children who participated in the program in 1996. (See app. II.) Other children may have lost access when continuing sponsors reduced the number of sites they operated because of the rate reduction. In fiscal year 1998, following the loss of 305 sponsors, at least 17,983 children, or 31 percent of the 58,562 children that had been served by these sponsors, lost access to the program. Of the 2,926 children served by sponsors that left the program specifically because of the reduction, at least 780 (27 percent) did not have access to the program through another sponsor in 1998. These children represented about .03 percent of all children who participated in the program in 1997. Appendix III provides more detailed information on our estimates of the number of children who had been served by sponsors that dropped out of the program and who were served by other sponsors the following year. The number of meals served by sponsors increased by over 2 percent—from over 125 million meals in fiscal year 1996 to over 128 million meals in 1997—the year after the reimbursements were reduced. The number of meals had increased during each of the previous 6 years reviewed. The increase in meals served in fiscal year 1997 would probably have been larger without the welfare reform change that affected camps and sponsors that primarily serve migrant children. As mentioned earlier, the 1996 act decreased the number of meals that these groups could claim for reimbursements from four to three. According to many state officials, if these sponsors had previously submitted four meals for reimbursements—breakfast, lunch, supper, and a snack—they did not submit the snack for reimbursement in 1997 because it has the lowest reimbursement rate. Our analysis of USDA’s data supports this conclusion. From 1996 to 1997, the number of meals for which sponsors requested reimbursements increased by only 2.4 percent. However, excluding snacks, the number of meals for which sponsors requested reimbursements increased by 4.7 percent. Figure 3 compares the rate of change for total meals and meals not including snacks for fiscal years 1991 to 1997. Because sponsors received lower federal reimbursements for their Summer Food Service Programs in 1997 and 1998, some adjusted their programs to lower their costs, according to many officials interviewed in our nationwide survey and the sponsors we visited. Frequently reported changes were to lower meal, labor, and/or location costs. Nevertheless, more sponsors reported operating costs that exceeded their reimbursements in 1997 than in 1996. In our telephone survey, officials in 24 states reported that there were program changes resulting from the loss of federal funds that were not related to the number of sponsors and children participating in the program. Some sponsors also told us that they made changes to mitigate the effects of the reduction. Following are frequently mentioned changes and examples of how these changes were implemented:
Meal changes. The Maine program director reported that sponsors in his state had to select their food more cautiously, favoring less costly items.
Wisconsin officials said sponsors provided more prepackaged juices in place of fruit and vegetables to decrease labor and food costs. In Hawaii, where the Department of Education prepares the food for sponsors to distribute, an official said that while the Department did not change the entree or fruit/vegetable servings, it did serve smaller bread and dessert portions to save money. The Georgia director said that sponsors served less fresh fruit and did not offer additional foods, such as desserts and chips, as often as in the past.
Staff adjustments. Several sponsors we visited reported decreasing their labor costs in 1997 and/or 1998. For example, a school sponsor in South Carolina decreased its workers’ salaries from $7.25 per hour to $6.25 per hour. A school in Texas did not pay cafeteria personnel the cost-of-living adjustments that they had received during the school year. A school in Oregon hired fewer, less experienced staff and reduced their hours.
Fewer meal sites. According to Pennsylvania officials, sponsors had to close or consolidate sites that were too expensive to operate. The small sites that served only 15 to 20 children could not afford to provide meals at the reduced rate. California officials also reported that some sponsors closed sites they could no longer afford to operate. For example, one school sponsor closed three of its six sites because it could no longer afford the labor costs. One of the largest sponsors in Texas, a school district, closed almost all the sites that served fewer than 50 children a day in 1997, and almost all the sites that served fewer than 100 children a day in 1998. However, the total number of children served by the sponsor has increased since 1996.
More sponsors submitted operating costs that exceeded their federal reimbursements in fiscal year 1997 than in 1996. This means that more sponsors are covering some of the cost of operating the program with other funds. In 1996, prior to the rate decrease, 30 percent (1,084) of sponsors were reimbursed for all their reported program costs because these costs were equal to or lower than the maximum reimbursements (total meals multiplied by the reimbursement rates). The remaining 70 percent, or 2,565 sponsors, reported costs exceeding their federal reimbursements. In 1997, fewer sponsors—25 percent (966)—were reimbursed for their reported costs, while more sponsors—75 percent (2,831)—reported costs that exceeded their reimbursements. These totals include sponsors that dropped out of the program after 1996 and new sponsors in 1997. The limited impact on the number of sponsors, children, and meals served that has been observed to date is due in part to sponsors’ continuing to contribute funds to offset the decreased reimbursements. Appendix IV provides more details on the type of sponsors that reported operating costs in excess of their reimbursements. We provided USDA with copies of a draft of this report for review and comment. We met with Food and Nutrition Service officials, including the Branch Chief for the Program Analysis and Monitoring Branch, Child Nutrition Division, who generally agreed with the report’s findings and provided us with a number of technical comments that we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, interested Members of Congress, the Secretary of Agriculture, and other interested parties. We will also make copies available upon request.
If you have any questions about this report, please call me at (202) 512-5138. Major contributors to this report are listed in appendix V. Because of questions raised about the impact that the reduction in the meal reimbursement rate might have had on the Summer Food Service Program, the Chairman of the House Committee on Education and the Workforce asked us to report on (1) the number and characteristics of sponsors participating in and dropping out of the program before and after the decrease in reimbursements, (2) the number of children and meals served by the program before and after the reduction, and (3) the changes sponsors made to their program as a result of the reduced reimbursements. To address these objectives, we conducted three nationwide surveys with officials from the 50 states. First, we conducted a telephone survey to gather information and views on the impact of the rate reduction on sponsors, children, and meals. Second, we sent a mail survey to officials in the 50 states to collect detailed fiscal year 1996 and 1997 data on their sponsors. The survey covered the number of meals, sites, July’s average daily attendance, and reported operating costs and reimbursements. State officials were also asked to identify (1) sponsors that dropped out of the program after 1996, (2) the major reason for dropping out, (3) the number of sites picked up by another sponsor, and (4) the portion of the children served by a new sponsor. For states that did not identify the sponsors that dropped out, we identified these sponsors by using financial and other data provided by the states. Two states did not provide average daily attendance for both years, and several provided different measures for attendance. In March 1998, we testified on the preliminary results of these surveys. Third, we sent a mail survey to the 50 states to collect their 1998 data. This third survey was aimed primarily at collecting information on the (1) number of sponsors that dropped out of the program after 1997, (2) reasons they dropped out, and (3) number of children who lost access to the program. To estimate the number of children who lost access to the program, we assigned percentages to the various response categories. For example, if state officials reported that “all or almost all” of the participants were picked up by another sponsor, then we estimated that 95 percent of the participants had program access through another sponsor and 5 percent lost access to the program. We combined the 1996, 1997, and 1998 data provided by the states to provide a picture of the effects of welfare reform to date. For example, we determined the percentage of sponsors that reported costs higher than their reimbursements prior to and after the rate reductions. Although we did not verify the accuracy of all the data provided by the states for each of the over 4,000 sponsors, we did conduct a variety of tests to verify the data’s internal consistency. For example, we ensured that, for sponsors categorized as dropping out after fiscal year 1996, there were no 1997 reimbursement data. We also identified any sponsors for which the data provided were outside of expected ranges. Where possible, we contacted the states to correct obvious errors. For a small percentage of sponsors, the state-provided information suggests the sponsors may have been overpaid. For example, some sponsors received reimbursements in excess of the costs they reported. A listing of these sponsors and their data will be provided to USDA for further review.
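To make the estimation method concrete, the following is a minimal Python sketch. The mapping from response categories to access percentages follows the methodology described above; the sponsor records in the example are hypothetical.

# Share of a dropped sponsor's participants assumed to retain program access,
# keyed by the state officials' response category (from the methodology above).
ACCESS_SHARE = {
    "all or almost all": 0.95,
    "more than half": 0.75,
    "about half": 0.50,
    "less than half": 0.25,
    "few, if any": 0.05,
}

def estimate_children_losing_access(dropped_sponsors):
    """Estimate the total number of children losing access across sponsors
    that dropped out. Each record is (children served in July, category)."""
    return round(sum(children * (1.0 - ACCESS_SHARE[category])
                     for children, category in dropped_sponsors))

# Hypothetical records: (children served, category reported by the state).
sample = [(400, "all or almost all"), (120, "few, if any"), (250, "about half")]
print(estimate_children_losing_access(sample))
# 400*0.05 + 120*0.95 + 250*0.50 = 20 + 114 + 125 = 259 children

Applied to each dropped-out sponsor and summed, this approach yields the access-loss estimates discussed above.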
We also compared our state data on the number of meals served in 1996 and 1997 with USDA’s data on the number of meals served for 1996 and 1997. Our total was generally within 5 percent of USDA’s total. To determine the changes sponsors made to their program because of the reduced rates, we selected 20 sponsors to study in greater detail. These sponsors—in California, Illinois, Oregon, Pennsylvania, South Carolina, and Texas—were selected to represent a variety of sponsor sizes and types (e.g., school, nonprofit, and government) in both rural and urban areas. We conducted in-depth, in-person interviews with these sponsors to determine the changes they had made to their programs in response to rate reductions. We observed the operation of the program at 17 sites, including meals being prepared by vendors, transported to sites, and eaten by children. In addition to conducting these in-depth studies and three surveys, we interviewed officials from USDA’s Food and Nutrition Service, the American School Food Service Association, and the Food Research and Action Center. We also reviewed related legislation, USDA documents—such as program guidance and budget data—and studies conducted by associations and states. We conducted our review from October 1997 through October 1998 in accordance with generally accepted government auditing standards. This appendix provides information for 1996 and 1997 on the total number of sponsors and the percent of sponsors that dropped out of the program in terms of the (1) number of children served, (2) number of meals served, and (3) type of sponsor. This appendix provides information on the number of children served by sponsors that dropped out of the program who did and did not have access to the program through other sponsors. (Other known reasons for dropping out include low participation, personnel loss, and construction.) For each sponsor that dropped out, state officials provided an estimate of the portion of participants served by the sponsor who were served by other sponsors in the next year. We assigned the following percentages to the various response categories: (1) “all or almost all”—95 percent had access and 5 percent did not; (2) “more than half”—75 percent had access and 25 percent did not; (3) “about half”—50 percent had access and 50 percent did not; (4) “less than half”—25 percent had access and 75 percent did not; and (5) “few, if any”—5 percent had access and 95 percent did not. In 1996, 38 of the dropouts did not operate in July. An additional 19 sponsors were in New Jersey or Arkansas, which did not provide data on the number of children served. These 57 sponsors served 336,744 meals in 1996, or 6 percent of all meals served by dropouts. Two of the 24 sponsors that left because of the rate reduction did not operate in July. (For the 1997 dropouts, the other known reasons again include low participation, personnel loss, and construction, and the same percentages were assigned to the response categories.)
Some programs did not operate in July and are not included in this table. In 1997, 42 of the total dropouts did not operate in July. An additional four sponsors were in New Jersey, which did not provide data on the number of children served. These 46 sponsors served 152,038 meals in 1997, or 5 percent of all meals served by dropouts. All sponsors that left because of the rate reduction operated in July. This appendix provides information, for fiscal years 1996 and 1997, on sponsors reporting costs that exceeded the maximum federal reimbursements, by participation category and by type of sponsor.
Thomas Slomba, Assistant Director
Leigh McCaskill White, Evaluator-in-Charge
Peter Bramble, Senior Evaluator
Fran Featherston, Senior Social Science Analyst
Alice Feldesman, Supervisory Social Science Analyst
Beverly Peterson, Senior Evaluator
Carol Herrnstadt Shulman, Communications Analyst
John W. Shumann, Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) Summer Food Service Program and the impacts the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 had on the program, focusing on the: (1) number and characteristics of sponsors participating in and dropping out of the program before and after the decrease in the reimbursements; (2) number of children and meals served by the program before and after the reduction; and (3) changes sponsors made to their programs as a result of the reduced reimbursements. GAO noted that: (1) the reduction in meal reimbursements that resulted from the 1996 Welfare Reform Act had a minimal impact on the number and characteristics of the sponsors of the Summer Food Service Program; (2) since the reduction went into effect in 1997, the number of sponsors participating in the program increased by 8 percent overall, from 3,753 to 4,046; (3) the characteristics of the sponsors providing meals remained about the same between 1996 and 1997; (4) in both years, a relatively small percentage of sponsors--5 percent--served most of the children in the program; (5) most sponsors continued to be schools and camps; (6) in terms of the sponsors that dropped out after the welfare reform changes, fewer than 10 percent left the program in each of the 2 years since the reduced reimbursements have been in effect; (7) however, only a small percentage of these dropouts left because of the reduction, according to officials in the 50 states; (8) after the reduced reimbursements were mandated, USDA and some states took actions to maintain the level of participation of sponsors and children; (9) the number of children and meals served in fiscal year (FY) 1997 was greater than in previous years; (10) the total number of children participating in the program increased by over 2 percent after the reimbursements were reduced, to almost 2.3 million, in 1997; (11) the number of meals served rose by 2 percent, to over 128 million, despite a new restriction on the number of meals for which some sponsors could receive reimbursements; (12) state officials identified a small number of sponsors that left the program because of the reduced reimbursements; (13) over 820 children lost access to the program in FY 1997 because sponsors that had served them in 1996 did not participate in 1997 as a result of the reduction, and no other sponsor was available; (14) at least 780 children lost access in FY 1998; (15) other children may have lost access when continuing sponsors reduced the number of sites they operated because of the rate reduction; (16) in response to the reduced meal reimbursements, some sponsors reported making changes to their program operations; (17) even with the changes, more sponsors reported that their operating costs exceeded the amount they received in federal reimbursements in FY 1997 than in 1996; and (18) the limited impact on the number of sponsors, children, and meals served that has been observed to date is due in part to sponsors' continuing to contribute funds to offset the decreased reimbursements.
From the 1950s through 2001, U.S. bilateral economic assistance to the Middle East and North Africa focused primarily on promoting regional stability by providing funding for large bilateral military and economic programs, chiefly in Egypt, Israel, and Jordan, and by fostering development. In December 2002, the State Department announced the creation of MEPI, which has expanded U.S. assistance to other countries in the region and includes programs focused on democracy and reform. Figure 1 shows MEPI’s area of operation. According to the administration, MEPI comprises a new approach to foreign assistance for the region, aimed at linking Arab, U.S., and global private-sector businesses, NGOs, civil society elements, and governments to develop innovative policies and programs that support reform in the region. MEPI is also an important element of the administration’s counterterrorism and public diplomacy strategies. MEPI projects are organized under its four reform pillars (see fig. 2). MEPI’s budget of $293 million for fiscal years 2002-2005 was derived from supplemental appropriations and annual foreign operations legislation. Figure 3 shows MEPI’s budget for fiscal years 2002-2005. The Office of Regional Affairs within State’s Bureau of Near Eastern Affairs (NEA) in Washington, D.C., administered MEPI activities during MEPI’s first 6 months. The office worked with USAID, which managed the long-standing bilateral economic assistance programs in the Middle East and North Africa, to implement most of its initial projects. The office also worked with the embassies in the region to implement projects funded through small grants. In June 2003, State established the Office of the Middle East Partnership Initiative within NEA to manage MEPI policy and projects and work closely with agencies across the U.S. government. The Deputy Secretary of State serves as Coordinator for MEPI. In August 2004, MEPI set up two regional offices in Tunisia and the United Arab Emirates. (See app. II for more details of MEPI’s organizational structure as of May 2005.) In fiscal years 2002-2004, State and USAID formally reviewed existing bilateral economic assistance programs in the Middle East and North Africa. From June 2002 to May 2003, State and USAID jointly conducted an in-depth review of U.S. bilateral assistance to Egypt, one of USAID’s largest programs, including assessing the programmatic and financial data for the seven strategic objectives in USAID/Egypt’s program. From September to December 2003, State and USAID conducted an abbreviated review of USAID’s program in West Bank and Gaza. In 2003, State participated in USAID’s periodic strategy reviews of its programs in Jordan, Morocco, and Yemen. In 2003 and 2004, State also considered information from U.S. embassies and from other U.S. government agencies with programs in the region to identify areas where MEPI could provide assistance. During this period, MEPI staff responsible for each of MEPI’s four reform pillars organized meetings with the other U.S. government agencies. In addition, State relied on several secondary sources to supplement the information received from formal reviews of USAID programs and input from embassies and other U.S. government agencies. Finally, State officials met with donors such as the World Bank and the European Union to learn about their activities in the region. State and USAID have used the reviews of existing U.S. bilateral economic assistance programs, as well as information obtained from other U.S.
entities providing assistance in the region, to realign many existing programs, strategic plans, and coordination mechanisms to conform to the new U.S. policy of promoting democracy and reform in the Middle East and North Africa. In fiscal years 2002-2003, MEPI and its administrative partners—U.S. embassies, USAID/Washington, and USAID missions—oversaw MEPI projects. MEPI and its administrative partners also obligated MEPI funds, with MEPI obligating about 45 percent of its fiscal years 2002-2003 funding and its administrative partners obligating the remainder. MEPI oversaw some of its projects directly in fiscal years 2002-2003 and partnered with U.S. embassies, USAID/Washington, and USAID missions to oversee other projects. (See fig. 4.) MEPI and its administrative partners negotiated agreements with NGOs, the private sector, and other U.S. government entities to implement more than 100 projects. MEPI and its administrative partners have obligated the $129 million that MEPI received for fiscal years 2002 and 2003. MEPI has directly obligated about 45 percent of the funds; MEPI’s administrative partners have obligated the remainder (see fig. 5).
MEPI: MEPI has obligated about 45 percent ($58 million) of its 2002-2003 funds directly to NGOs, the private sector, and U.S. government entities.
NGOs and the private sector: MEPI has obligated 33 percent ($42.8 million) of the funds in various grants directly to American or locally based NGOs or private companies. According to MEPI, these grants are intended to support innovative ideas that can be implemented quickly to produce concrete results, such as increasing women’s political participation. MEPI officials say that they would like to obligate more MEPI resources directly, but they acknowledge that such a shift would require MEPI staff to shoulder a significant administrative burden in monitoring these projects.
U.S. government implementers: Through interagency acquisition agreements, MEPI has obligated about 12 percent ($15.2 million) of its 2002-2003 funds to U.S. government entities with specific expertise in the areas of reform that MEPI supports. These entities implement projects, such as the Department of the Treasury’s technical assistance project to develop expertise in surplus and debt management in Algeria.
U.S. embassies: Embassies have obligated about 1 percent ($1.4 million, or $100,000 per country) of MEPI’s 2002-2003 funds through its small-grants program. A central objective of the program is to build the capacity of local organizations, such as the Ibn Khaldun Center, which, among other things, introduces concepts of economic opportunity to young women in two of Egypt’s rural and urban areas to enable them to improve their living conditions. The embassies use the funds to award local organizations small, short-term grants of up to approximately $25,000. The MEPI office approves all small-grant decisions, and embassy staff perform grant administration.
USAID/Washington: USAID/Washington has obligated about 40 percent ($50.8 million) of MEPI’s fiscal years 2002-2003 funds, including funding for many of MEPI’s largest projects. According to USAID and MEPI officials, many of these projects have been implemented through preexisting cooperative or contractual agreements between the U.S. government and U.S. NGOs or contractors.
USAID missions: USAID missions have obligated about 15 percent ($18.7 million) of MEPI’s 2002-2003 funds, according to budget data provided by USAID and MEPI.
These funds have been used to complement USAID’s existing efforts, such as civil society capacity building and local political party training. Figure 6 shows the obligation of MEPI funds to Middle Eastern and North African countries and West Bank and Gaza in fiscal years 2002-2003. According to State, the reviews of U.S. bilateral economic assistance have been used to (1) identify new reform areas that were not previously addressed and (2) develop a results-based strategy for implementing and evaluating the performance of MEPI activities. Responding to the reviews of U.S. bilateral economic assistance in the region, as well as information provided by other U.S. entities, MEPI has initiated reform activities that were not being addressed by U.S. agencies in the region, including in countries where USAID does not operate. For example:
Political reform: The reviews found that in many countries where MEPI operates, little progress had been made in political reform, including women’s political involvement. In response, MEPI has provided funds to help increase the number of women candidates and assist them through activities such as improving their media presentation techniques.
Economic reform: The reviews revealed that although limited assistance was being provided to increase trade capacity within the region, U.S. agencies were not addressing the need to increase the access and capacity of small and medium-sized enterprises in countries such as Morocco. Regionally, MEPI has supported a program that aims to, among other things, increase managerial and entrepreneurial skills in several countries. MEPI also funds efforts to teach business people and managers of small and medium-sized enterprises how to take immediate advantage of U.S. bilateral free trade agreements by initiating export sales, acquiring new technology, and developing joint ventures and other strategic alliances with U.S. firms (see fig. 7).
Educational reform: U.S. agencies operating in the region identified gaps in education for the impoverished and for students in primary and secondary grade levels. MEPI funds implementers that provide Arabic books for elementary school children in Jordan, Lebanon, and Bahrain and teach Bahraini mothers and preschool-age children living in poverty. (See fig. 8.)
Women’s empowerment: U.S. agencies identified the lack of gender equity as an impediment to reform in the region. To create more opportunities for women, MEPI funds projects that provide young women from the Middle East and North Africa unique opportunities to learn management and business skills in the classroom and while working for U.S. companies. (See fig. 9.)
The 2002 and 2003 reviews of U.S. bilateral economic assistance in the Middle East and North Africa have significantly influenced MEPI officials’ development of MEPI’s results-based strategy, including the objective of providing close monitoring of MEPI projects. In particular, according to State, the comprehensive USAID/Egypt review cited weaknesses in USAID/Egypt’s mission performance monitoring system that limited the mission’s ability to strategically reallocate resources away from programs that were not achieving their objectives. According to one of the review’s principal authors, USAID/Egypt did not monitor its projects closely or frequently enough to obtain the performance information needed.
State recommended that USAID make changes to the performance monitoring system, including engaging consultants and producing more rigorous, succinct, and integrated qualitative and quantitative measurements of performance at all levels of activity. MEPI officials told us that these observations made MEPI management aware of the need to monitor projects’ short-term performance and hold project implementers accountable for results. MEPI’s September 2004 project monitoring and management plan indicates that all project agreements should include benchmarks and performance goals that reflect the results MEPI hopes to attain, based on pillar-specific objectives. The plan also states that, to prevent negative impacts on project outcomes, staff should ensure that project implementers are on the right track at all stages of project development and implementation. However, neither MEPI’s general strategy nor its monitoring and management plan provides specific guidance regarding project monitoring. MEPI’s ability to monitor the performance of its projects and measure results has been limited by (1) unclear communication of monitoring roles and responsibilities and (2) a lack of complete project information. MEPI has not clearly communicated roles and responsibilities for project monitoring in accordance with federal standards for management control, and as a result, not all projects have received the level of monitoring needed to measure results. According to MEPI officials, MEPI project monitoring responsibilities generally would include participation in project activities, regular review and assessment of implementer performance reports, and regular feedback to implementers on project performance. However, we found that some MEPI administrative partners, particularly those overseas, have been confused about their roles and responsibilities for project monitoring and that some MEPI activities have been monitored sporadically or not at all. Officials from 9 of the 14 U.S. embassies supporting MEPI projects stated that they were uncertain of their roles and responsibilities for monitoring MEPI projects and that MEPI had not yet specified their monitoring duties. USAID/Washington officials told us that USAID mission staff were expected to assist USAID/Washington in monitoring MEPI projects that it administers. However, USAID mission officials in Rabat, Morocco, told us that MEPI projects administered by USAID/Washington were not being monitored and that they had not received instructions for monitoring these projects. Officials at USAID/Washington acknowledged that it had not performed many of its monitoring duties in the field and stated that USAID/Washington is currently working to ensure better communication and provide instructions to the missions regarding their support of project monitoring. In addition, MEPI and USAID officials in Washington, D.C., as well as MEPI regional office, embassy, and USAID mission office staff, said that they were uncertain of the role that the MEPI regional offices would play in monitoring MEPI projects. Although MEPI officials generally expect that regional office staff will lead the monitoring of projects that MEPI directly administers, currently only one of MEPI’s two regional offices has a monitoring specialist on staff. In April 2004, the Department of State’s Office of Inspector General (OIG) expressed concern that MEPI was communicating inconsistent priorities and causing confusion among a number of embassies. 
MEPI and USAID officials in Washington, D.C., acknowledged that confusion exists regarding roles and responsibilities for project monitoring. MEPI officials stated that they were still defining MEPI’s and its administrative partners’ respective roles and responsibilities and working toward agreement on them with USAID and other parties. MEPI officials also stated that on July 14, 2005, they signed an addendum to an October 2004 memorandum of agreement (MOA), establishing a framework for project management roles and responsibilities—including monitoring—of MEPI and USAID officials in Washington and in the Middle East and North Africa.

Measured against federal standards for management control, MEPI’s monitoring of project performance has been limited by (1) a lack of baseline information, (2) inconsistency among projects’ performance reporting requirements, (3) unverified project information, (4) inconsistent communication of project information, (5) incomplete project records, and (6) lack of access to project information.

Lack of baseline information: Our examination of a selected sample of MEPI project documents showed that for most of these projects, MEPI lacked baseline information against which to measure their performance. According to MEPI’s strategy, baseline data are necessary for measuring future progress on its projects, and MEPI officials told us that it was important to establish these measurements at the beginning of each project as the basis for determining project performance. However, of the project agreements we received from MEPI and USAID for the 25 projects in our selected sample, only three contained a requirement to establish a baseline, and none of the projects reported baseline measurements. MEPI officials told us that they were aware that they had not required baseline measurement in all of their project agreements but said that they planned to do so in future agreements.

Inconsistent reporting requirements: MEPI project agreements do not consistently require that implementers report on quantitative, measurable performance indicators. As a result, some implementing organizations report general project information instead of the measurable indicators of performance that MEPI has stated are needed to manage a results-based program. According to one MEPI official, some of the reports are therefore too vague to be useful in monitoring project performance. In our selected sample of 25 projects, a substantial majority of the performance indicators required in project agreements were qualitative rather than the quantitative indicators of project performance that MEPI has stated are necessary for managing MEPI as a results-based program. Further, a February 2005 report by USAID’s OIG found that 6 of 17 USAID/Morocco MEPI projects did not have indicators to measure project performance. One MEPI official told us that staff sometimes negotiate requirements for additional indicators informally over the telephone and in e-mail after signing a project agreement. However, these informally negotiated requirements are not formally documented, readily available, or tracked by MEPI or the implementers. MEPI officials said that they are seeking to acquire grant-tracking software that would facilitate the documenting of agreed-on indicators and closer monitoring of implementers’ reporting on these indicators.
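To illustrate the kind of record keeping such grant-tracking software could provide, the following sketch (in Python) models a project agreement with its agreed-on indicators and flags the two gaps discussed above—missing baselines and agreements with no quantitative indicators. It is purely illustrative: the report does not describe any actual system design, and every name and field here is hypothetical.

# Illustrative only: a minimal model of grant tracking for agreed-on
# indicators. Nothing here reflects an actual MEPI or State system.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    quantitative: bool                 # MEPI's strategy calls for measurable indicators
    baseline: float | None = None      # value established at project start, if any
    reported: dict[str, float] = field(default_factory=dict)  # quarter -> value

@dataclass
class ProjectAgreement:
    project_id: str
    indicators: list[Indicator] = field(default_factory=list)

    def missing_baselines(self) -> list[str]:
        """Indicators for which no baseline was ever established."""
        return [i.name for i in self.indicators if i.baseline is None]

    def qualitative_only(self) -> bool:
        """True if the agreement requires no quantitative indicators at all."""
        return not any(i.quantitative for i in self.indicators)

# Usage: flag agreements that cannot support results-based monitoring.
agreement = ProjectAgreement("FY03-EX-001",
                             [Indicator("women candidates trained", True)])
print(agreement.missing_baselines())   # ['women candidates trained']
print(agreement.qualitative_only())    # False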
Unverified project information: MEPI’s ability to verify the information reported in implementers’ quarterly reports and to provide detailed performance feedback to implementers has been limited, because MEPI and its administrative partners have not consistently observed project activities. In the sample we analyzed, only 8 of the 19 implementers we interviewed said that MEPI and its administrative partners had made on-site visits to discuss and observe project activities. According to embassy, MEPI, and USAID mission staff, they often do not have time to monitor MEPI projects, including visiting or observing project activities, because of conflicting demands on staff time. USAID OIG’s February 2005 report stated that, primarily because of limited resources, the USAID/Morocco mission had not validated the performance data reported by implementers.

Inconsistent communication of project information: MEPI has inconsistently communicated project information, particularly to embassies and USAID missions overseas. Overseas embassy and USAID officials stated that MEPI had not always informed them of important project information, including when new projects were to begin in their country of operation. USAID’s OIG February 2005 report noted that MEPI and USAID/Washington had not always notified USAID/Morocco when they awarded regional activities taking place within the country. As a result, in some cases, embassy officials have had little or no knowledge of some implementers’ activities. Officials from two embassies and one USAID mission told us that MEPI project implementers had arrived in country expecting embassy and USAID assistance, although the embassies and USAID missions had received little or no prior notice of the projects’ start or the implementers’ needs. In another example, in one country we visited, we arrived expecting to meet with a particular project implementer but found that embassy officials there were unaware of the project or the implementer’s operations in country. Further, when embassies have communicated with MEPI, MEPI has not always been responsive. Three embassies stated that, since MEPI’s inception, MEPI had provided no feedback or guidance in response to their regular reports on MEPI activities. However, according to MEPI officials, embassies have not always responded to MEPI’s communications. For example, according to one senior MEPI official, after the summer 2003 awarding of small grants through the embassies, the majority of embassies administering MEPI projects did not respond to December 2003 requests from MEPI for basic evaluative descriptions of the projects the embassies were administering. According to this senior official, the embassies’ failure to respond left MEPI, in some cases, without records of actual activities conducted or descriptions of successes and lessons learned.

Incomplete project records: MEPI and USAID have not maintained complete records for all of the projects that they administer, which has limited their ability to monitor project performance. We found gaps in records that the MEPI office in Washington, D.C., maintained for the projects that it administered. For example, in our selected sample of 25 projects, MEPI’s files for the fiscal year 2003 projects that it administered directly contained only 44 percent of the quarterly reports that project implementers were to have submitted.
MEPI officials said that the grant-tracking software they sought to acquire would enable them to track the submission of implementers’ quarterly reports. Without such software, the MEPI office has managed project data on Excel spreadsheets and office calendars, using the incomplete copies of quarterly reports that it has maintained since the program’s inception. MEPI officials in the past have asked implementers to provide copies of records missing from their files. In addition, we found that USAID had incomplete records for the projects that it administers for MEPI. For example, in our selected sample, USAID’s files for the fiscal year 2003 projects that it administered contained 63 percent of MEPI project agreements with implementers and 62 percent of the quarterly reports that project implementers were to have submitted. USAID officials stated that their records were scattered in many locations—in the various bureaus in Washington, D.C., that administer MEPI projects and at USAID missions in the Middle East and North Africa. In other cases, USAID has had to obtain copies of the records from the implementing organizations themselves.

Lack of access to project information: MEPI has lacked ready access to information on USAID-administered MEPI projects, which has limited its ability to monitor those projects’ performance. According to MEPI’s strategy and officials, MEPI is responsible for ensuring the performance of all projects that it funds, including those administered by USAID. However, MEPI and USAID have maintained separate performance and financial records, and MEPI and USAID officials stated that prior to the July 2005 signing of the MOA addendum regarding monitoring roles and responsibilities, they had not reached agreement on the sharing of these records. Further, USAID officials stated that USAID has not maintained records for MEPI projects separately from its records for non-MEPI-funded projects. Some organizations implementing MEPI projects for USAID have included MEPI performance reporting in their general reporting on the region, making USAID’s extraction of information on MEPI projects more difficult and time-consuming. MEPI and USAID officials said that they expected the recently signed addendum to the 2004 MOA to facilitate the sharing of performance and financial information.

In accordance with the U.S. foreign policy of promoting democracy and reform in the Middle East and North Africa, State established MEPI to design and fund projects supporting political, economic, and educational reform and the empowerment of women. However, despite MEPI’s strategic emphasis on monitoring projects’ performance, a failure to clearly communicate roles and responsibilities and a lack of complete project information have hampered MEPI’s monitoring of its projects. In July 2005, subsequent to receiving our preliminary findings regarding the need to communicate project monitoring roles and responsibilities to its administrative partners, State finalized an agreement with USAID that establishes a framework for roles and responsibilities and coordination between the agencies. Although these guidelines represent progress in clarifying roles and responsibilities, it remains essential that specific monitoring duties be established for, and communicated to, all parties involved in each MEPI project.
Without the ability to evaluate its projects’ performance with certainty, and lacking access to complete information, MEPI has limited capacity to meet its strategic goals of producing tangible results and making results-based decisions.

To bolster MEPI’s ability to monitor and evaluate project performance, and to help ensure that MEPI achieves its goals of producing tangible results and making results-based decisions, we are making three recommendations to the Secretary of State. We recommend that the Secretary of State ensure that MEPI managers (1) clearly delineate, document, and communicate roles and responsibilities for project monitoring; (2) systematically obtain, maintain, and communicate complete information regarding all MEPI projects, including performance and financial data; and (3) regularly assess MEPI’s progress in communicating roles and responsibilities and obtaining, maintaining, and communicating key information.

We provided a draft of this report to the Secretary of State and the Administrator of USAID for their review and comment. Both officials provided written responses, which we have printed in appendixes III and IV. State and USAID also provided technical comments that we incorporated as appropriate. Although State and USAID agreed with our recommendations, State disagreed with the extent of our finding that it could not with certainty evaluate its projects’ performance; however, it did not point out specific aspects of that finding with which it disagreed. State’s comments acknowledged our recommendations regarding bolstering its ability to monitor and evaluate project performance. In its comments, State said that its July 2005 agreement with USAID regarding project management should improve the management, information flow, documentation, and monitoring of USAID-administered MEPI projects. In addition, State said that it would implement a monitoring and evaluation timetable for every project, identify projects that require on-site monitoring in the next year, and assign responsibility to the appropriate MEPI action offices. State’s comments also laid out other general steps that, if taken, would help ensure more comprehensive project monitoring, including two steps to guarantee complete and accurate project information: (1) ensuring that files on every project are complete and (2) implementing an integrated grants and project management database system. In addition, State said that it would provide ongoing training in grants and project management for staff both in MEPI Washington and at MEPI regional offices and engage outside consultants to provide project monitoring support where necessary. State did not comment on our recommendation that it monitor progress on clearly delineating, documenting, and communicating roles and responsibilities for monitoring and systematically obtaining, maintaining, and communicating complete information about all MEPI projects.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and to the Secretary of State and the Administrator of USAID. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-4128 or gootnickd@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

In this report, we (1) describe MEPI’s structure for administering projects and obligating funds, (2) examine MEPI’s uses of the reviews of U.S. bilateral economic assistance in the region, and (3) evaluate MEPI’s monitoring of its projects. To describe the administration of MEPI projects and obligation of MEPI funding, we examined budget and programmatic documentation provided by State and USAID. This documentation included information on funding levels and project administration for fiscal year 2002 and 2003 projects. We also interviewed State and USAID officials familiar with MEPI budget and project administration, in particular for fiscal years 2002 and 2003. Although our audit generally covered MEPI activities from December 2002 through May 2005, in reviewing MEPI’s budget information and project monitoring, we focused on data for fiscal years 2002-2003, because they were the most complete data that MEPI could provide. To describe MEPI’s uses of the 2002-2004 reviews of U.S. bilateral economic assistance to the region, we examined (1) State’s and USAID’s joint reviews of programs in Egypt and West Bank and Gaza; (2) USAID’s reviews of democracy and governance for Algeria, Bahrain, Jordan, Morocco, Tunisia, and Yemen; and (3) other supplemental information provided by U.S. embassies and USAID missions in the region, including mission performance plans and USAID country strategies. We also conducted interviews with officials at State and USAID headquarters in Washington, D.C.; officials representing embassies and USAID missions in 9 of the 14 MEPI countries; and officials administering MEPI programs in the territories of West Bank and Gaza. To assess MEPI’s mechanisms for monitoring its activities, we obtained information from MEPI on all of its fiscal year 2003 projects, including information on each project’s pillar, MEPI partner, implementing organization, funding level, project location, and project type (country-specific or regionwide). From this information, we selected a nonprobability sample of 25, or approximately 34 percent, of the 73 MEPI projects funded in fiscal year 2003. This sample also represented about 63 percent of MEPI’s $100 million in fiscal year 2003 project funding. We used the sample to examine project monitoring performed by MEPI and its administrative partners and reports provided by project implementers. Our sample included regionwide projects and projects from the countries we would be visiting, projects with the highest funding levels for fiscal year 2003, projects administered both by MEPI and USAID, and projects from each of the four MEPI pillars. The documentation that we reviewed included project proposals, project agreements, scopes of work and work plans, and quarterly reports for the 25 projects. Our analysis of these documents clarified the nature of MEPI’s monitoring and reporting requirements and supplemented and validated information that we obtained through interviews. We developed a structured interview instrument to systematically collect data on the 25 projects, which included questions on implementer reporting requirements, the use of performance baselines and indicators, and feedback and verification by MEPI and its administrative partners.
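The coverage figures for this sample reduce to simple proportions. As a purely illustrative check (in Python), using only the aggregate figures stated above:

# Sample coverage implied by the figures cited in the text; the dollar
# amounts are the stated approximate aggregates, not project-level data.
sampled_projects, total_projects = 25, 73
print(f"{sampled_projects / total_projects:.0%}")   # 34% of FY2003 projects

sampled_funding, total_funding = 63_000_000, 100_000_000  # approximate
print(f"{sampled_funding / total_funding:.0%}")     # 63% of FY2003 project funding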
We analyzed the interview data by developing categories for each relevant response, using the categories to code the data, and tallying the codes to obtain the number of projects to which each response was applicable. We also interviewed officials from MEPI, USAID, U.S. embassies, and implementing organizations in the United States, Bahrain, Egypt, and Morocco regarding MEPI project monitoring. We selected these MEPI-participating countries because (1) Bahrain is a high-income country and does not have a traditional U.S. bilateral assistance program; (2) Egypt, a lower-middle-income country, is one of the largest recipients of U.S. bilateral development funding; and (3) Morocco, a lower-middle-income country, was the largest recipient of fiscal year 2003 MEPI funding. We also traveled to Germany during a MEPI regional conference, where we met with embassy and USAID officials responsible for project monitoring in 8 of the 14 MEPI countries and in the West Bank and Gaza. In addition, we used U.S. government internal control standards to assess relevant areas of MEPI’s management control system that affected project monitoring, including (1) roles and responsibilities of MEPI staff and administrative partners in Washington, D.C., and overseas and (2) information flow. We conducted our fieldwork in Washington, D.C., and in Bahrain, Egypt, Germany, and Morocco from July 2004 to May 2005 in accordance with generally accepted government auditing standards.

The following are our comments on the Department of State’s letter dated July 22, 2005.

1. State commented that it agreed with GAO’s conclusion that the Department of State has effectively launched MEPI reform programs. However, GAO did not audit the effectiveness of MEPI projects and did not present such a conclusion in this report.

2. State commented that our report recognized that MEPI had established effective partnerships with other bureaus at State, with other U.S. government agencies, and with a wide range of nongovernmental organizations (NGOs), including the private sector. However, our report did not present such a conclusion. Our report describes rather than assesses the relationships between MEPI and other U.S. government and nongovernment entities.

3. State said that it disagreed with the extent of our finding that MEPI cannot with certainty evaluate its projects’ performance, but it did not specify aspects of that finding with which it disagreed. State’s comments acknowledged our recommendations regarding bolstering its ability to monitor and evaluate project performance and laid out general steps that, if taken, would help ensure more comprehensive project monitoring. For example, State said that it would ensure the completion of every project’s files, to allow for periodic reviews of activity status and project outcomes and shortcomings; engage outside consultants to provide project monitoring and evaluation support, where necessary; and implement an integrated grants and program management database system. In addition, State pointed out that, subsequent to receiving our preliminary findings in May, it finalized, on July 14, 2005, an addendum to an October 2004 memorandum of agreement (MOA) with USAID on USAID’s administration of MEPI projects. This MOA addendum establishes a framework for roles and responsibilities and coordination between State and USAID.
In its comments, State said that this agreement should improve the management, information flow, documentation, and monitoring of USAID-administered MEPI projects.

In addition to the individual named above, Zina Merritt (Assistant Director), as well as David Dornisch, Suzanne Dove, Reid Lowe, Grace Lui, and Addison Ricks, made key contributions to this report.
In December 2002, the U.S. Department of State (State) established the Middle East Partnership Initiative (MEPI) to promote democracy in the Middle East and North Africa. MEPI provides assistance for political, economic, and educational reform and women's empowerment. In fiscal years 2002-2004, State and the U.S. Agency for International Development (USAID) reviewed U.S. bilateral economic assistance programs in the region to ensure they were aligned with the new U.S. policy focus on promoting democracy and reform. In this report, GAO (1) describes MEPI's structure for managing projects and allocating funding, (2) examines MEPI's uses of the reviews, and (3) evaluates MEPI's project monitoring. MEPI has worked with U.S. embassies, USAID headquarters in Washington, D.C., and USAID missions overseas to manage projects and obligate funding. In turn, MEPI and its partners have negotiated agreements with nongovernmental organizations, the private sector, and other U.S. agencies to implement the projects. MEPI has obligated about 45 percent of the $129 million that it received for fiscal years 2002-2003, and its partners have obligated the remainder. MEPI used the State and USAID reviews of existing U.S. bilateral economic assistance programs in the Middle East and North Africa in two ways. First, in response to the reviews, MEPI targeted reform activities in the Middle East and North Africa that were not being addressed by other U.S. agencies. For example, responding to the reviews' finding that little progress had been made in supporting women's political involvement, MEPI provided funds to assist women candidates. Second, MEPI shaped its strategy in response to the reviews, particularly regarding the need to monitor projects' short-term results and hold project implementers accountable for project performance. Despite its strategic emphasis on monitoring projects' performance, MEPI's monitoring has been limited by unclear communication of roles and responsibilities and a lack of complete project information. MEPI has acknowledged these deficiencies and begun to address them; in July 2005, State and USAID agreed on a framework for project monitoring roles and responsibilities. Without the ability to evaluate its projects' performance with certainty and access to complete information, MEPI's capacity to meet its strategic goals of producing tangible results and making results-based decisions is limited.
The regulatory structure of the U.S. securities markets was established by the Securities Exchange Act of 1934, which created SEC as an independent agency to oversee the U.S. securities markets and their participants. Similarly, in 1974 the Commodity Exchange Act established CFTC as an independent agency to oversee the U.S. commodity futures and options markets. Both agencies have five-member commissions headed by chairpersons who are appointed by the President of the United States for 5-year terms. Among other things, the commissioners approve new SEC and SRO rules and amendments to existing rules. They also authorize enforcement actions. SEC and CFTC are headquartered in Washington, D.C. SEC has a combined total of 11 regional and district offices; CFTC has 5 regional offices. Within SEC and CFTC, the divisions of enforcement are responsible for investigating possible violations of the securities and futures laws, respectively. With their commissions’ approval, they litigate or settle actions against alleged violators in federal civil courts and in administrative actions. Typically, enforcement staff investigate alleged violations of law, prepare a memorandum for the commissioners that describes alleged violations, and, if appropriate, make recommendations for further action. When the commissions decide that a case warrants further action, they can authorize filing a civil suit against the alleged violator in federal district court or instituting a proceeding before an administrative law judge. If either the court or the administrative law judge finds that a defendant has violated securities or futures laws, it can issue a judgment ordering sanctions such as fines and disgorgements and, in the case of futures violations, restitution; it can also bar or suspend violators from the securities and futures industries. The collection process for delinquent debt begins when all or part of a fine or disgorgement becomes delinquent because the violator has failed to pay some or all of the amount due by the date ordered by the court or administrative law judge. If the court or administrative law judge has not specified a payment date and no stay has been entered, SEC considers the debt delinquent 10 days after the court enters the judgment. CFTC officials told us that absent an appeal, they consider the debt delinquent 15 or 60 days after the administrative law judge or court entered the judgment in administrative and civil cases, respectively. SEC and CFTC collect delinquent monetary judgments primarily through post-judgment litigation, negotiating payments with defendants, and making referrals to the Department of Treasury or the Department of Justice. In accordance with the Debt Collection Improvement Act of 1996, SEC and CFTC have each entered into an agreement with the Department of Treasury to improve collections. Under this act, federal agencies are required to submit all nontax debts that are 180 days delinquent to Treasury’s FMS. The act also requires that FMS either take appropriate steps to collect the debt or terminate collection actions. In addition to using traditional methods to collect these debts, such as sending demand letters and hiring private collection agencies, FMS can use TOP. Under TOP, FMS identifies federal payments, such as tax refunds, that are owed to individuals and applies the payments to their outstanding debt. All cases referred to FMS for collection are also eligible for referral to and servicing under TOP. 
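The referral and offset mechanics described above reduce to two rules: a nontax debt more than 180 days delinquent must be submitted to FMS, and under TOP an eligible federal payment, such as a tax refund, can be applied against the debtor’s outstanding balance. The Python sketch below is a minimal illustration of those two rules only; the data model is hypothetical and does not represent any agency system.

# Illustrative sketch of the referral-and-offset rules described above.
from datetime import date

REFERRAL_THRESHOLD_DAYS = 180  # Debt Collection Improvement Act of 1996

def eligible_for_fms_referral(delinquent_since: date, today: date) -> bool:
    """A nontax debt delinquent for more than 180 days must go to FMS."""
    return (today - delinquent_since).days > REFERRAL_THRESHOLD_DAYS

def apply_top_offset(balance: float, federal_payment: float) -> tuple[float, float]:
    """Apply a federal payment against the debt; return
    (remaining balance, portion of the payment left for the payee)."""
    offset = min(balance, federal_payment)
    return balance - offset, federal_payment - offset

# Example: a fine delinquent since mid-January, checked at the end of
# September, and a $10,000 balance offset by a $2,400 tax refund.
print(eligible_for_fms_referral(date(2002, 1, 15), date(2002, 9, 30)))  # True (258 days)
print(apply_top_offset(10_000.0, 2_400.0))  # (7600.0, 0.0)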
FMS also uses collection agencies to negotiate compromise offers with individual debtors. A compromise offer is an agreement between a federal agency and an individual debtor, in which the federal agency agrees to discharge a debt by accepting less than the full amount. Once the collection agency negotiates a compromise offer with a debtor, it forwards the offer to FMS. In the absence of an agreement between FMS and the federal agency to approve compromise offers on its behalf, FMS refers the offer to the federal agency for final approval.

The U.S. securities and futures markets are regulated under their respective statutes through a combination of self-regulation (subject to federal oversight) and direct federal regulation. This regulatory scheme was intended to give SROs responsibility for administering their own operations, including most of the daily oversight of the securities and futures markets and their participants. Two of the SROs—NASD and NFA—are associations that regulate registered securities and futures firms and oversee securities and futures professionals, respectively. The remaining SROs include national exchanges that operate the markets where securities and futures are traded. These SROs are primarily responsible for establishing the standards under which their members conduct business; monitoring the way that business is conducted; and bringing disciplinary actions against their members for violating applicable federal statutes, their own rules, and the rules promulgated by their federal regulator. SROs can impose fines and other sanctions against members that violate securities or futures laws or SRO rules, as applicable, through their enforcement and disciplinary processes. Some SROs’ disciplinary proceedings are decided by a hearing panel, which examines the evidence and decides on the appropriate sanction. SROs’ actions are usually initiated by a customer complaint, a compliance examination, market surveillance, regulatory filings, or a press report.

SEC and CFTC have taken actions to improve their collection programs, addressing the three recommendations in our 2001 fines report. However, it was too early to assess the effectiveness of their actions. After we made our first recommendation, SEC took various steps, among them implementing collections guidelines that were intended to ensure that eligible delinquent cases are referred to FMS, including TOP. But SEC’s actions have not ensured that all eligible cases are referred. To address our second recommendation, SEC developed procedures for responding within 30 days to compromise offers submitted by FMS. To address our third recommendation, CFTC implemented procedures for ensuring the timely referral of delinquent cases to FMS for collection.

SEC implemented regulations, related procedures and guidelines, and a collections database intended to ensure that eligible delinquent cases are referred to FMS, including TOP, as required by the Debt Collection Improvement Act of 1996. However, SEC has focused on referring post-guidelines cases, and it was too early to assess the effectiveness of SEC’s strategy as it related to these cases. In contrast, SEC did not have a formal strategy for referring pre-guidelines cases and, further impeding its collection efforts, it did not have a reliable agencywide system for tracking monies owed in these cases.
Recognizing that its system was unreliable, SEC has drafted a two-phase action plan under which it will implement a centralized agencywide tracking system for all delinquent debt. However, it has not established a time frame for fully implementing the computer system for the second phase of the plan. We recommended in our 2001 report that SEC take steps to ensure that regulations allowing SEC’s delinquent fines to be submitted to TOP be adopted so that SEC would benefit from the associated collection opportunities. At the time of our review, SEC officials had told us that they had rewritten their rules for using TOP but that they could not estimate when the rules would be approved by the commission or implemented. After we made our recommendation, SEC amended its debt collection regulations. In April 2002, SEC implemented related procedures to allow cases to be forwarded to TOP. Consistent with the Debt Collection Improvement Act of 1996, the procedures required that cases be referred to FMS after they had been delinquent for more than 180 days. SEC subsequently issued additional guidelines and implemented a collections database that were intended to ensure that eligible delinquent post-guidelines cases are referred to FMS, including TOP, within 180 days of becoming delinquent. SEC imposed the more stringent requirement on itself in recognition of the enhanced probability of collecting monies ordered on newer cases. The guidelines provided more detailed instructions for staff on how to pursue collections, specifying steps for referring eligible delinquent cases to FMS, including TOP, within 180 days. According to an agency official, the guidelines went into effect agencywide on September 2, 2002. SEC also created a collections database for all post-guidelines fines and disgorgement cases that is maintained by headquarters and each regional or district office, as applicable. The database tracks actions that staff have taken to recover debt on delinquent cases, including preparing cases for referral to FMS, and is used to help ensure that staff are following the new collections guidelines. SEC officials told us that the agency was tracking only post-guidelines cases because the database had limited storage capacity and could become unstable if too many cases were added. In addition, the agency has assigned attorneys and administrative staff to every office to maintain the database and its related collection activities for delinquent cases, including ensuring that eligible cases are referred to FMS and TOP in a timely manner. According to an agency official, these staff received training on using the guidelines in the fall of 2002. It was too early to fully assess the effectiveness of SEC’s strategy for tracking, collecting, and referring post-guidelines cases, because most of these cases were not yet 180 days delinquent. Based on a judgmental sample of 66 cases, we identified 4 delinquent fines and disgorgement cases valued at $4 million that were eligible for referral as of March 31, 2003. We found that SEC had referred two of the four cases within the 180-day time frame and was preparing the other two for referral. Although SEC had developed controls to better ensure that eligible post-guidelines cases were promptly referred to FMS and TOP, it had not developed a formal strategy for referring eligible pre-guidelines cases. Such a strategy would include prioritizing cases based on their collection potential and establishing time frames for making the referrals.
Further impeding its collection efforts, SEC’s original system for tracking monies owed in pre-guidelines cases—DPTS—was not reliable. As a result, SEC could not identify all the cases that had not been referred to FMS and TOP. SEC officials told us that the agency’s April 2002 procedures applied to the pre-guidelines cases and that agency attorneys had followed these procedures in referring some pre-guidelines cases to Treasury. But SEC did not know the extent to which the procedures were being followed or whether eligible cases were not being referred. They explained that the attorneys would know the status of the cases assigned to them but that no agencywide information was available. They also told us that they expected all eligible cases to be referred to FMS and TOP eventually but noted that they had not prioritized the cases for referral or established time frames for referring them. Neither we nor SEC could determine with any certainty the extent to which eligible pre-guidelines cases were not being referred to FMS and TOP due to the unreliability of DPTS. Using DPTS, the only information available, we identified about 900 pre-guidelines cases valued at about $2.8 billion that were 180 days past due and that might be eligible for referral. As of January 31, 2003, almost 54 percent of these cases were over 3 years old based on their judgment date, which, in the absence of better data, we used as a rough proxy for the delinquency date. SEC officials emphasized that these numbers do not accurately reflect the number of pre-guidelines cases eligible for referral to FMS and TOP. They said that some of the cases were ineligible for referral because they were on appeal, in post-judgment litigation, or had a receiver appointed to marshal and distribute assets. In addition, many cases might already have been referred for collection. SEC officials also pointed out that our calculations of the age of cases were inaccurate because we relied on the judgment date rather than the delinquency date, which is not tracked in DPTS. We recognize that many factors affect the accuracy of DPTS, including some that might not be mentioned here. However, we are reporting these numbers as the best information available. Both GAO and SEC have recognized DPTS’s lack of reliability. Our 2002 disgorgement report and a January 2003 report commissioned by the SEC Inspector General found that DPTS was not complete and accurate and could not be relied upon for financial accounting and reporting purposes. Recognizing that the agency did not have a system that provided an accurate assessment of levied amounts and payments (among other things), SEC developed a draft action plan for implementing a new system to replace DPTS. The April 2003 draft plan calls for implementing a comprehensive centralized system for tracking, documenting, and reporting on fines and disgorgements ordered, paid, and disbursed in SEC enforcement actions. The agency had been taking steps to address the milestones in the plan. If the plan is effectively implemented, the agency should have a tool for accurately identifying uncollected pre-guidelines cases for referral to FMS and TOP for collection. SEC’s action plan has been divided into two phases. In the first phase, SEC is tentatively scheduled to replace DPTS by the end of fiscal year 2003. SEC officials described the replacement system as a comprehensive case tracking, record-keeping, and reporting system for fines and disgorgements ordered, paid, and distributed. 
They said that the system will be integrated with a database maintained by the Division of Enforcement. The replacement system is intended to, among other things, maintain the data on debt needed for general reporting and management purposes. According to SEC officials, one benefit of the replacement system will be to assist the agency in managing its delinquent cases. However, SEC will continue to rely on its new collections database, which tracks collection efforts on post-guidelines cases, to ensure the timely referral of these cases to FMS and TOP until phase two of the action plan is implemented. In phase two, SEC plans a comprehensive upgrade to its case tracking system, which will be integrated with several other databases, including the new collections database. SEC expects to begin the requirements analysis for the phase two computer system in fiscal year 2004 but has not established a milestone for completing this analysis. After the requirements analysis is complete, SEC plans to establish an implementation date for the system. We recommended in our July 2001 report that SEC continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner. Our recommendation resulted from a finding that SEC did not always respond to compromise offers promptly and that as a result some debts had never been collected. For example, we reported that FMS waited between 42 and 327 days for SEC’s decisions on three compromise offers. But by the time SEC made its decisions, the debtors no longer had the money to pay the amounts specified in the compromise offers. To address this concern, in April 2001 FMS proposed securing delegation authority from SEC—that is, permission to approve compromise offers that SEC did not respond to within 30 days. In response to our recommendation, SEC took several steps to ensure that compromise offers are approved in a timely manner. First, in July 2001 SEC implemented procedures specifying the actions required to address a compromise offer, including a schedule to ensure that a decision is made within 30 days. For example, within 5 days of receiving an offer, SEC staff are to have made a final decision on whether to recommend the offer to the commission for approval. SEC also implemented controls to monitor the status of offers. When it receives a compromise offer from FMS, SEC enters the offer into a system that tracks information such as the date the offer was made, the name of the attorney reviewing the offer, the date the offer was referred to the commission for a final decision, and the date of the final decision. The Division of Enforcement’s chief counsel monitors the status of offers based on weekly reports generated from this system to ensure that follow-up action is taken to address any problems. Finally, SEC has designated two staff to respond to FMS inquiries about the status of compromise offers. It is still too early to determine the effectiveness of SEC’s actions. As of April 22, 2003, SEC had received four compromise offers from FMS under its new procedures. SEC and FMS data showed that SEC had responded to three of the offers within the 30-day guideline and to one offer within 40 days. The late offer represented a debt of $1.6 million, and the settlement offer was for $50,000. SEC staff told us that the agency ultimately rejected the offer, at least in part because of the disparity between the amount offered and the amount owed.
SEC officials attributed the delay in responding to this offer to scheduling conflicts caused by the holiday season. The officials told us that the agency was in touch with FMS before the end of 30 days to indicate, on an informal basis, that the reply to the compromise offer would be delayed and that the offer would be rejected. FMS officials told us that they did not view SEC’s late response to this offer as a problem—that is, the delay did not represent weaknesses in agency policies, procedures, or controls. They said that SEC had shown marked improvement in responding to compromise offers and that as a result FMS was no longer seeking delegation authority from SEC. We recommended in our 2001 report that CFTC take steps to ensure that delinquent fines were promptly referred to FMS, including creating formal procedures that addressed both sending debts to FMS within the required time frames and requiring all of the necessary information from the Division of Enforcement on these debts. Our recommendation flowed from a finding in an April 2001 report by CFTC’s Inspector General showing that CFTC staff were not referring delinquent debts to FMS in a timely manner, potentially limiting FMS’s ability to collect the monies owed. The report also noted that CFTC’s collection procedures had not been updated to address referrals to FMS and, among other examples, identified a fine in the amount of $7 million that had not been referred to FMS for more than 2 years because of inadequate communication between CFTC’s Division of Enforcement and its Division of Trading and Markets. As we recommended, CFTC has improved its procedures for referring its debt to FMS in a timely manner and has taken steps to ensure that it has all the necessary enforcement information before making the referral. CFTC updated its collection procedures and implemented them in July 2002. They now include specific requirements for referring debt to FMS within 180 days of the date that the debt became delinquent. CFTC also implemented controls to ensure that it has identified all delinquent debt eligible for referral. For example, CFTC management reviews quarterly reports on the status of cases to ensure that all debts are referred to FMS within 180 days. According to CFTC officials, the agency’s shift of all debt collection responsibility from its Division of Trading and Markets to its Division of Enforcement streamlined its debt referral process. Although it is too early to fully assess the effectiveness of CFTC’s actions, our review of CFTC’s data on uncollected cases indicated that the agency had been referring all eligible debt to FMS within 180 days. As of April 24, 2003, CFTC had had four delinquent cases dating from the time its procedures went into effect. Using FMS’s data, we confirmed that the cases had been referred to FMS within 123 days. Also, a review of CFTC’s data of all delinquent cases levied before the procedures went into effect showed that CFTC had referred all eligible cases to FMS for collection. FMS officials told us that CFTC had been making debt referrals with complete information on all its cases. SEC and CFTC have taken steps to address our two recommendations for improving their oversight of SROs’ sanctioning practices. But SEC has not fully implemented our 1998 recommendation that it analyze industrywide data on SRO-imposed sanctions to examine disparities and help improve disciplinary programs. 
The agency has experienced technological problems that have hampered its ability to complete these analyses. In addition—and consistent with our 2001 recommendation—SEC and CFTC have been monitoring readmission applications to the securities and futures industries. However, at the time of our review neither had received any applications since changing their fine imposition practices. Also, SEC, CFTC, NASD, and NFA have controls designed to ensure that inappropriate readmissions do not occur. Further, while examining the application review process, we found weaknesses in controls over fingerprinting that could result in inappropriate admissions to the securities and futures industries. In our 1998 report, we recommended that SEC analyze industrywide information on disciplinary program sanctions, particularly fines, to identify possible disparities among the SROs and find ways to improve SROs’ disciplinary programs. We concluded that analyzing industrywide data could provide SEC with an additional tool to identify disparities among SROs that might require further review. We reported in 2001 that SEC had developed a database to collect information on SROs’ disciplinary actions. As of June 30, 2003, according to agency officials, SEC was still inputting information into its database but had not yet completed any analyses because technological difficulties had hampered its ability to collect sufficient data to perform the analyses. First, the database had a limited number of fields and therefore could not capture multiple disciplinary violations or multiple parties in a single case. In October 2002, SEC officials told us that they had addressed this limitation by enhancing the database to incorporate the required fields and were continuing to add disciplinary information to the database. However, in November 2002, the enhanced database failed because it could not support multiple users. SEC repaired the database, and agency officials told us that they expected to complete their first data analyses in the summer of 2003. The analyses are expected to show whether SROs impose similar fines and sanctions for similar violations. An SEC official said that the agency expects these analyses to supplement the information obtained during agency inspections of the SROs’ disciplinary programs. SEC officials told us that the agency is planning to use funds from its fiscal year 2003 budget increase to develop a new disciplinary database that will replace the current one. According to SEC officials, this new disciplinary database is expected to allow SROs to submit data on-line rather than having to send it to SEC to be entered by staff. This streamlined process is expected to reduce data entry errors. An SEC official told us that while planning had begun for the new disciplinary database, no completion date had been established. In our 2001 report, we recommended that SEC and CFTC periodically assess the pattern of readmission applications to ensure that the changes in NASD’s and NFA’s fine imposition practices do not result in any unintended consequences, such as inappropriate readmissions. NASD and NFA had stopped routinely assessing fines when barring individuals in October 1999 and December 1998, respectively, eliminating the related requirement that the fines be paid as a condition of reentry to the securities and futures industries. These fines had rarely been collected, because few violators ever sought reentry. 
We were concerned that because barred individuals were no longer required to pay a fine before reentry, they might be more willing to seek readmission. Consistent with our recommendation, SEC and CFTC have monitored readmission applications. They found, and we confirmed, that no individuals who were barred after the changes in NASD’s and NFA’s fine imposition practices had applied for reentry. Also, NASD’s and NFA’s application review processes included controls designed to ensure that inappropriate applications for reentry are not approved. Officials of both SROs told us that as part of their background checks they did a database search against the names of past and current registrants in both industries to determine whether the applicants had a disciplinary history. In addition, both SROs submitted applicants’ fingerprints to the Federal Bureau of Investigation (FBI) for a criminal background check. NASD and NFA required all individuals who had been suspended, expelled, or barred to be—at a minimum—sponsored by a registered firm before being considered for readmission. According to a CFTC official, finding a sponsor is difficult, as most firms would not hire an individual with a history of serious disciplinary problems, in part due to increased supervisory requirements and the risk of harming their reputations. SEC and CFTC were reviewing the applications of all individuals who had been statutorily disqualified from registration, including any barred individuals, and had the authority to reverse an admission decision made by NASD or NFA, respectively. SEC and CFTC officials told us that they would consider various factors when reviewing a readmission application, including the facts and circumstances of the case, the appropriateness of the proposed supervision, and the prospective employer’s ability to provide the proposed supervision. Officials from both agencies told us that if they were to begin receiving a large number of applications from barred applicants, they would reexamine the SROs’ fine imposition practices. While examining the application review process, we found that neither the related statutes, SEC, nor CFTC required the SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belonged to the applicants who submitted them. Further, in the absence of such a requirement, NASD, the New York Stock Exchange (NYSE), and NFA lacked related controls over fingerprinting, potentially allowing inappropriate persons to enter the securities and futures industries. The securities and futures laws require that applicants to these industries have their fingerprints taken and then sent for review to the FBI as part of a criminal background check. The goal of the criminal background check is to ensure that inappropriate individuals are not granted admission to the securities or futures industries. The statutes also require SRO member firms to be responsible for assuring that their personnel are fingerprinted. SEC and CFTC rules provide that applicants can satisfy this requirement by submitting fingerprints to the SROs who then send them to the FBI for processing. However, neither the statutes, SEC, nor CFTC require SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belong to the applicants who submitted them. 
In the absence of such a requirement, NASD, NYSE, and NFA have not imposed requirements on member firms to help ensure that the identity of the person being fingerprinted matches the fingerprints being submitted for FBI review. The SROs told us that, consistent with the law, they required their members to be fingerprinted and that these fingerprints were submitted to the FBI for assessment. NYSE officials emphasized that their members were in full compliance with the law and related regulations, which do not require specific controls. In the absence of specific requirements, firms have taken a variety of approaches to fingerprinting applicants. For example, while SEC and some SROs told us that most firms used their own personnel or police officers to obtain fingerprints, they said that a small number of firms may allow applicants to fingerprint themselves, a practice that provides an opportunity for individuals to perpetrate fraud by submitting someone else’s fingerprints instead of their own. According to SEC and CFTC officials, their agencies have trained staff in their headquarters and some regional offices that take fingerprints of their employees using approved fingerprinting kits. An NFA official also stated that NFA headquarters has trained staff that take fingerprints of industry applicants, verifying their identities as part of the process. The FBI also informed us that it suggests using law enforcement or other trained personnel to take fingerprints. SEC and NYSE also said that many reputable businesses provide fingerprinting services and that SRO member firms could contract with these businesses. In a 1996 CFTC review of NFA’s registration fitness program, CFTC recommended that NFA conduct a review to determine the feasibility of adopting controls to ensure that the fingerprints submitted for criminal history checks belonged to the applicant. NFA found that a number of obstacles stood in the way of establishing an effective program to verify fingerprints. According to an NFA official, the agency examined the procedures of the Bureau of Citizenship and Immigration Services of the Department of Homeland Security (formerly the Immigration and Naturalization Service) in responding to CFTC’s recommendation. On the basis of this examination, NFA concluded that it would not be cost-effective to replicate the bureau’s procedures. For example, unlike NFA, the bureau has fingerprinting sites throughout the country with trained employees to take fingerprints. As part of its review, NFA considered requiring an attestation form, which would include the fingerprinter’s name and address and the document used to verify the applicant’s identity. Ultimately, however, NFA concluded that such a form could be subject to forgery and would not provide assurance that the fingerprints belonged to the applicant. CFTC accepted NFA’s conclusions. NYSE and NFA officials described other obstacles to establishing controls over fingerprinting. They explained that space limitations on the FBI fingerprint card made it difficult to identify the person taking the fingerprints. Further, they said that the card provided space for the fingerprinter’s signature, which is often illegible, but not for the fingerprinter’s printed name or the name of another contact who could verify information related to the fingerprints. NYSE officials also said that the FBI could adjust its fingerprint card so that it required more complete contact information for the person taking the fingerprints.
An NFA official also told us that because some SROs process registration applications both nationally and internationally, these SROs would not be able to establish enforceable rules regarding who should take fingerprints. We did not determine the extent to which individuals with a criminal history could submit someone else’s fingerprints and thus enter the securities or futures industries undetected. However, SEC and CFTC officials said that the SROs’ fingerprinting processes are vulnerable to such a practice because of the lack of controls for preventing applicants from using someone else’s fingerprints as their own. SRO officials said that existing systems were reasonably designed to prevent fraud but were not foolproof, adding that the potential cost of imposing any unduly restrictive requirements was a concern. Some SRO officials said that to the extent they are needed, SEC and CFTC should establish industrywide standards. NFA officials said that since weaknesses in fingerprinting procedures apply equally to the securities and futures industries, SEC and CFTC should establish comparable requirements to ensure that one industry is not at a disadvantage to the other. NYSE officials said that SEC rulemaking would be the most appropriate method for changes to fingerprinting procedures in the securities industry.

To provide a more complete picture of efforts by securities and futures regulators to collect fines, we calculated the collection rates in two different ways. The collection rates for closed cases (cases with a final judgment order for which all collection actions were completed) for SEC, CFTC, and the SROs from January 1997 to August 2002 showed that the regulators collected most of the fines imposed. Broadening the analysis to include open cases (cases with a final judgment order that remained open while collection efforts continued) had the greatest impact on SEC’s and CFTC’s collection rates because of a few large uncollected fines. Our analysis of the collection rates highlights a theme introduced in an earlier report: that the collection rate alone may not be a valid measure of the effectiveness of collection efforts, because collections can be influenced by factors that are outside regulators’ control. SEC, CFTC, and the SROs collected between 75 and 100 percent of all the fines imposed in closed cases. For these cases, collection efforts had ceased either because the fines had been collected in full or in part or were unlikely to be collected and thus had been written off as bad debts. As shown in table 1, SEC and CFTC collected about 94 and 99 percent, respectively, of the total dollars levied in cases closed from January 1997 through August 2002—the period immediately following the one covered in our 1998 fines report. These amounts represent an 11 and 18 percentage point increase, respectively, over the rates presented in the 1998 report, which covered the 1992–96 period. CFTC wrote off fewer fines as uncollectible in the more recent period, and almost all of its collected fines were paid in full. The eight securities and futures SROs for which data were available had the same or higher collection rates on closed cases in the most recent period compared with the earlier period. The Chicago Board of Trade’s collection rate showed significant improvement, increasing from 54 to 95 percent of the total dollars levied. Its collection rate for the 1992–96 period was heavily influenced by two large uncollected fines totaling $2.25 million.
Excluding those two cases, the rate for this period would have been about 99 percent rather than 54 percent—much closer to the 95 percent rate for the more recent period. NASD's and NFA's rates also showed significant improvement, increasing 71 and 48 percentage points, respectively, over the rates presented in the 1998 report, which covered the 1992–96 period. However, NASD's and NFA's collection rates improved because, as we have noted, the regulators stopped routinely assessing fines when barring individuals from the securities and futures industries. These fines had been the most difficult to collect, because barred individuals had little incentive to pay them.

SEC's and CFTC's collection rates were affected more than the SROs' rates when we added open cases to our calculations. As shown in table 2, SEC collected about 40 percent of the total dollars levied in all cases, open and closed, from January 1997 through August 2002—54 percentage points less than its rate for closed cases. We examined SEC's collection rates by year and found that the rates varied greatly over time because of a few large fines. (See appendix III for the collection rates of the securities regulators for open and closed cases by calendar year.) For example, in 1999 SEC collected 26 percent of the total fines levied in that year, but one uncollected fine of $123 million significantly lowered the rate. Had SEC been able to collect this one fine, its collection rate for 1999 would have been 89 percent (fig. 1). Also, in 2002, SEC collected 61 percent of all fines, but approximately half of the dollars collected came from payments made by two violators. Excluding these payments, the reported collection rate for 2002 would have been about 30 percent (fig. 1). To help control for the influence of large dollar amounts on SEC's collection rates, we analyzed the number of cases paid in full and found that SEC had collected the full amount of the fine in the majority of cases it levied. For the entire period from 1997 through 2001, 72 percent of the fines levied had been paid in full. In 2002, 55 percent of the fines levied were paid in full. The rate may be lower for 2002 because SEC has had less time—approximately 4 months—to collect on cases levied through August 2002.

CFTC collected about 45 percent of the total dollar amount of the fines it levied over the same period. Like SEC's rate, CFTC's was heavily influenced by a few large fines. A closer review of CFTC's annual rates from January 1997 through August 2002 showed that the regulator collected between 2 and 90 percent of the total fines levied. (See appendix IV for the collection rates of the futures regulators for open and closed cases by calendar year.) In 2000, when CFTC's collection rate was just 2 percent, our calculations included a single uncollected fine of $90 million. Had CFTC been able to collect this one fine, its collection rate would have been 95 percent (fig. 2). Also, in 1998, when CFTC collected 90 percent of the total dollar amount it levied that year, one payment of $125 million heavily skewed the rate (fig. 2). Without this one fine and its payment, CFTC's reported collection rate would have been approximately 7 percent (fig. 2). To help control for the influence that large dollar amounts can have on the rate, we again analyzed the number of cases paid in full. Over the entire period of our study, from 1997 through August 2002, CFTC had collected the full amount in slightly more than 50 percent of the cases it levied.
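The outlier arithmetic described above is easy to reproduce. The following is a minimal illustrative sketch, not GAO's or SEC's actual computation; the 1999 dollar totals are backed out from the two reported rates (26 percent actual, and 89 percent had the $123 million fine been collected) and are therefore approximations.

```python
# Illustrative sketch of how one large uncollected fine can dominate an
# annual collection rate. Dollar amounts are in millions and are derived
# from the rates reported in the text, so they are approximate.

def collection_rate(collected, levied):
    """Dollars collected as a percentage of dollars levied."""
    return 100.0 * collected / levied

# Back out the totals implied by the two reported 1999 rates:
#   collected / levied = 0.26
#   (collected + 123) / levied = 0.89  =>  levied = 123 / (0.89 - 0.26)
levied_1999 = 123 / (0.89 - 0.26)    # roughly $195 million levied
collected_1999 = 0.26 * levied_1999  # roughly $51 million collected

print(f"Rate as reported:        {collection_rate(collected_1999, levied_1999):.0f}%")
print(f"Rate if $123M collected: {collection_rate(collected_1999 + 123, levied_1999):.0f}%")
```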
Although CFTC's collection rates over the entire period of our study were relatively low, the agency was actively pursuing collections on all its uncollected cases, primarily through the Departments of the Treasury and Justice. CFTC's Chief of Cooperative Enforcement told us that the agency would continue to levy large fines when appropriate, even though large uncollectible amounts could reduce the agency's collection rate. He said that levying fines that are commensurate with the related wrongdoing sends a message to the public that CFTC is serious about enforcing its statutes.

The collection rates for the nine securities and futures SROs were comparable in both sets of calculations (see table 2). When we included open cases in our calculations, these SROs' collection rates decreased slightly, with all but two (NASD's and the New York Mercantile Exchange's) declining between 1 and 9 percentage points. One reason for the relatively small decline was that these SROs generally had fewer and smaller uncollected fines, suggesting that they had been more successful than SEC and CFTC in collecting on all cases. According to an NFA official, one reason that the SROs that operate markets had higher collection rates was that in their role as exchanges they could sell a member's "seat," or membership, to pay off the fine, giving members an incentive to pay their fines. Because other regulators do not have this type of leverage, their rates are typically lower.

NASD's collection rate for closed cases was 95 percent, and its rate for open and closed cases was 66 percent—a decline of 29 percentage points. NASD's rate for open and closed cases was affected by low collections in 1997 and 1998. As a result, the rates did not necessarily reflect the effects of the changes NASD made to its fine imposition practices in October 1999. As indicated in figure 3, NASD's annual collection rates generally increased from January 1997 through December 2002. In 1997, NASD collected 26 percent of the total dollars invoiced. In 2002, it collected 96 percent—a 70 percentage point increase over 6 years. As we reported earlier, one of the primary reasons for the increase was a change in the way NASD imposes fines. Specifically, NASD stopped routinely assessing fines when barring an individual from the industry, reducing the number of fines it invoiced each year and improving its overall collection rate. Also, in calculating its rate, NASD excluded about $137 million in fines that would be due and payable only if the fined individuals were to reenter the securities industry.

The New York Mercantile Exchange's collection rate for open and closed cases was 83 percent—a decline of 17 percentage points from its closed case rate. When we excluded one uncollected $200,000 fine, the collection rate for open and closed cases declined by only 4 percentage points.

Collection rates are the most widely available—and in some cases the only—measure of regulators' success in collecting fines for violations of securities and futures laws. But external factors over which regulators have no control can skew these rates. Nonetheless, examining the rates and the factors influencing them can be a starting point for understanding regulators' performance and changes to it. Also, in exploring these rates, regulators can identify cases that account for a significant share of uncollected debts and decide whether continuing collection efforts for these cases is worthwhile; a simplified sketch of the two rate calculations follows.
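To make the two calculations concrete, here is a minimal sketch under stated assumptions: the case records, dollar amounts, and status labels below are invented for illustration and do not come from any regulator's data. It shows how a single large, mostly uncollected open case can depress the combined rate while leaving the closed-case rate high.

```python
# Hypothetical sketch of the closed-case and open-plus-closed collection
# rate calculations described in this report. All records are invented.

cases = [
    # (dollars levied, dollars collected, case status)
    (100_000, 100_000, "closed"),  # paid in full; collection actions complete
    (250_000, 200_000, "closed"),  # partially collected; balance written off
    (500_000,  50_000, "open"),    # final judgment entered; collection continuing
]

def rate(records):
    """Total dollars collected as a percentage of total dollars levied."""
    levied = sum(levy for levy, _, _ in records)
    collected = sum(paid for _, paid, _ in records)
    return 100.0 * collected / levied

closed_only = [c for c in cases if c[2] == "closed"]
print(f"Closed cases only:     {rate(closed_only):.0f}%")  # ~86%
print(f"Open and closed cases: {rate(cases):.0f}%")        # ~41%
```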
Primary among the external factors affecting collection rates are the large fines and payments that we have been discussing. Just one or two extremely large uncollected fines can lower a collection rate significantly. Similarly, one or two large payments on such fines can raise a collection rate. Other external factors that can influence collection rates include violators' ability to pay and the size of the fines themselves. For example, an SEC official said that some violators who have been barred from the industry cannot pay their fines because their earning capacity has been limited. In discussing CFTC's relatively low collection rate, an agency official told us that the courts, in an attempt to match the gravity of the sanction to the offense, have sometimes imposed fines that are larger than what an agency might realistically be able to collect. This official said that in one case, a court fined a company $90 million—triple the monetary gain from its illegal activities. He also said that in another case, a court assessed fines totaling $4 million against four violators, although CFTC had sought $660,000.

Since our last report, SEC and CFTC have made material improvements to their policies and procedures for collecting delinquent fines that, if followed, should improve collections on debts owed to the federal government. Nonetheless, SEC lacks a formal strategy for collecting on its pre-guidelines delinquent debt. Although the probability of collecting monies ordered on older cases diminishes over time, some portion of these pre-guidelines cases may have collection potential that is being overlooked. Developing a formal strategy that prioritizes pre-guidelines cases based on their collection potential and establishes time frames for their referral to FMS and TOP would improve the likelihood of collecting some portion of the debt associated with these cases, which could be more than $1 billion. The success of SEC's efforts to collect this debt will be closely related to the timely replacement of DPTS. Phase one of SEC's action plan includes a tentative deadline for replacing DPTS by the end of fiscal year 2003. At that time, SEC will be able to identify all cases eligible for referral to FMS and TOP and develop a strategy for making these referrals. SEC has not yet set a milestone for completing the requirements analysis for phase two of its action plan or established a date to fully implement the computer system that will integrate SEC's now separate databases. We are concerned that, without target dates, progress in implementing phase two could be slowed, affecting SEC's ability to more efficiently address all cases that should be referred to FMS and TOP.

Further, SEC's progress has been slow in the 5 years since we recommended that the agency analyze industrywide information on SRO disciplinary program sanctions, in part because technological problems have hindered its ability to collect sufficient data to perform the analyses. SEC has not yet completed its first analysis and has no schedule for implementing the new disciplinary database intended to replace its current database. Finally, while controls were in place that should keep barred individuals from being readmitted to the securities and futures industries, neither the related statutes nor SEC or CFTC require the SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belong to the applicants who submit them.
In the absence of such a requirement, the SROs lacked related controls that could help prevent inappropriate admissions to the securities and futures industries. SRO involvement in weighing alternatives for addressing fingerprinting requirements for the securities and futures industries would help ensure that concerns about cost-effective solutions are appropriately considered and addressed.

We recommend that the SEC Chairman develop a formal strategy for referring pre-guidelines cases to FMS and TOP that prioritizes cases based on collectibility and establishes implementation time frames; take the necessary steps to implement the action plan to replace DPTS by (1) meeting the fiscal year 2003 milestone for implementing phase one of the plan, (2) setting a milestone for completing the requirements analysis for phase two of the plan, and (3) establishing and meeting the implementation date for phase two; and analyze the data that have been collected on the SROs' disciplinary programs, address any findings that result, and establish a time frame for implementing the new disciplinary database that is to replace the current database. We also recommend that SEC and CFTC work together and with the securities and futures SROs to address weaknesses in controls over fingerprinting procedures that could allow inappropriate persons to be admitted to the securities and futures industries.

We requested comments on a draft of this report from the Chairmen, or their designees, of SEC and CFTC. SEC officials provided written comments, which are reprinted in appendix II. CFTC provided oral comments. In general, both agencies agreed with the facts we presented and also agreed to implement the recommendations we made. SEC emphasized that it expected to meet its milestone for implementing a replacement database for DPTS by the end of fiscal year 2003 and said that once the new system was in place, the agency would be able to identify delinquent debts that had not been referred to FMS and TOP and set deadlines for making referrals. While SEC said that further milestones for phase two of its action plan would be set in the future, it made no reference to establishing a time frame for implementing its new disciplinary database. We believe that SEC needs to move quickly to set time frames for both of these projects, because in the absence of dates on which to focus, progress may be delayed. SEC also said that agency staff would contact CFTC to review the possibility of adopting new industrywide fingerprinting standards, including procedures to verify the identities of all individuals who are being fingerprinted. CFTC officials told us that they would work with SEC and the SROs to address our recommendation. We also received technical comments from SEC and CFTC, which we incorporated into the report as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and its Subcommittee on Securities and Investment; the Chairman, House Committee on Energy and Commerce; the Chairman, House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises; and other interested congressional committees.
We will send copies to the Chairman of SEC, the Chairman of CFTC, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any further questions, please call me at (202) 512-8678 (dagostinod@gao.gov) or Cecile Trop at (312) 220-7705 (tropc@gao.gov). Additional GAO contacts and staff acknowledgments are listed in appendix V.

To evaluate SEC's and CFTC's actions to improve their collection programs, we assessed their responses to our 2001 recommendations that (1) SEC take steps to ensure that regulations allowing SEC fines to be submitted to TOP are adopted; (2) SEC continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner; and (3) CFTC take steps to ensure that delinquent fines are referred promptly to FMS, including creating formal procedures that address both sending debts to FMS within the required time frames and requiring all of the necessary information from the Division of Enforcement on these debts.

To assess the steps SEC took to ensure that regulations allowing SEC fines to be submitted to TOP were adopted, we reviewed SEC's final regulations and related procedures and collection guidelines. To determine compliance with the new collection guidelines for referring delinquent cases to TOP, we selected a judgmental sample of 66 post-guidelines fines and disgorgement cases using DPTS and obtained information from SEC on the referral status of those cases. Of the 66 cases, four were eligible for referral at the time of our review. We selected cases in which judgments or orders were entered after SEC's guidelines took effect, because staff told us they were tracking the referral of those cases. To determine the number, dollar amount owing, and age of the delinquent cases at the agency, we identified all cases with ongoing collections, using DPTS data as of January 31, 2003, and calculated the age from the judgment date (which, in the absence of better data, we used as a rough proxy for the delinquency date) to January 31, 2003. Because DPTS data were unreliable, the aging analysis provides only a rough estimate of the total number and age of cases. We interviewed SEC and FMS officials to obtain their views on SEC's progress in referring cases to FMS and TOP and information on any impediments to this progress.

To assess SEC's efforts to continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner, we examined SEC's procedures for processing compromise offers. We obtained data from SEC on the four compromise offers FMS submitted to SEC between July 1, 2001, and April 22, 2003, and analyzed the length of time it took SEC to respond to the offers. We obtained and used FMS's data to validate SEC's response times. We also interviewed SEC and FMS officials to discuss SEC's policies, procedures, and controls and to obtain information on the agencies' efforts to work together to ensure the timely approval of offers. We also obtained FMS's views on SEC's progress in responding to offers.

To assess the steps CFTC took to ensure that delinquent fines are promptly referred to FMS, we reviewed CFTC's collection procedures, which it calls instructions, to ensure that they included time frames for referring cases to FMS and provisions for obtaining all necessary enforcement information. We also reviewed related agency controls.
To assess staff's compliance with the revised procedures, we obtained data from CFTC on its only four delinquent cases and analyzed the length of time it took to refer them to FMS. We obtained and used FMS's data to validate that all of CFTC's cases had been transferred within 180 days. We also interviewed CFTC officials to discuss the agency's procedures and controls and obtained FMS's views on CFTC's progress in referring fines.

To assess SEC's and CFTC's efforts to enhance their oversight of the SROs' sanctioning practices, we assessed their responses to our 1998 and 2001 recommendations that (1) SEC analyze industrywide information on disciplinary program sanctions, particularly fines, to identify possible disparities among the SROs and find ways to improve the SROs' programs; and (2) SEC and CFTC periodically assess the pattern of readmission applications to ensure that the changes in NASD's and NFA's fine imposition practices do not result in any unintended consequences, such as inappropriate readmissions. To assess the status of SEC's efforts to analyze industrywide information on SROs' disciplinary program sanctions, we interviewed SEC officials to discuss the types of analyses planned, any obstacles encountered, and efforts to overcome those obstacles. To assess both SEC's and CFTC's efforts to periodically assess the pattern of readmission applications, we interviewed officials of these agencies to determine the number of readmission applications from barred individuals and reviewed documentation that described the controls used to keep barred applicants from reapplying. We focused our review on permanent bars and on application records submitted since NASD and NFA changed their fine imposition practices in October 1999 and December 1998, respectively.

To validate both agencies' statements that they had not reviewed any readmission applications from barred individuals since our 2001 report, we obtained the names of barred individuals from NASD and NFA and verified that each individual had not applied for readmission. Specifically, for NASD, we compared the names of over 900 barred applicants who had not been fined against a list of readmission applications. We focused on these individuals because of concerns that individuals who had been barred and not fined might be more willing to seek readmission than those who had been barred and fined. For NFA, we researched the histories of 32 barred individuals, using NFA's database to validate that none of them had applied for readmission. We examined all barred applicants, including both those who had been fined and those who had not, because the data did not allow us to distinguish between these groups. To ensure that NFA's and NASD's data were sound, we interviewed agency officials to assess the controls these agencies had over their data systems, such as their processes for entering and updating data, safeguards for protecting the data against unauthorized changes, and any tests conducted to verify the accuracy and completeness of the data. We found that the data were usable for our purposes.

To address concerns that surfaced during our review about controls over the fingerprinting procedures used in criminal history checks, we interviewed officials at NASD, NFA, NYSE, and the FBI and reviewed laws and regulations related to fingerprinting. In addition to NYSE, other SROs that operate markets have agreements with the FBI under which they may submit fingerprints to the FBI for criminal history checks.
We limited our review to NYSE because it is the largest SRO that operates a market, and we wanted to determine how another SRO's procedures might differ from those of NASD and NFA.

To calculate the fines collection rates for SEC, CFTC, and nine securities and futures SROs for 1997 through 2002 (all years were calendar years), we focused on these regulators' imposition and collection of fines through their enforcement and disciplinary programs. The nine SROs were the American Stock Exchange, the Chicago Board Options Exchange, the Chicago Board of Trade, the Chicago Mercantile Exchange, the Chicago Stock Exchange, NASD, NFA, the New York Mercantile Exchange, and NYSE. We excluded fines for minor rule infringements such as floor conduct, decorum, and record-keeping violations that normally do not undergo disciplinary proceedings. The exchanges generally referred to these as "traffic ticket" violations; they are handled through summary proceedings and involve smaller fine amounts. We excluded amounts owed for disgorgement and restitution, except for NASD, because these sanctions differ from fines in that they are imposed to return illegally made profits or to restore funds illegally taken from investors. Because of the way NASD tracked its fines and payments, NASD was unable to exclude disgorgement amounts from its payment data. We also excluded fines that were not invoiced, because they would not be due unless the fined individual sought to reenter the securities industry. All other fines were factored into the rate, including fines dismissed in bankruptcy, to obtain the most complete view possible of the regulators' efforts to discipline violators.

To calculate annual fines collection rates and composite collection rates, we obtained and analyzed data from SEC, CFTC, and all SROs, except NASD, on fines levied from January 1997 through August 2002 and collected through December 2002. NASD's data include fines invoiced from 1997 through 2002. We limited our review to fines levied through August 2002 to allow regulators through December 2002 (4 months) to attempt collections. We calculated the collection rate in two ways. First, we calculated the rate by including only closed cases—that is, cases with a final judgment order for which all collection actions were completed. This approach is consistent with the one used in our 1998 report. Second, to provide a more complete view of regulators' collection activities, we calculated the rate using all closed and open cases—that is, cases with a final judgment order for which collection actions were completed and cases with a final judgment order that remained open while collection efforts continued. For cases with a payment plan, we adjusted the levy amount to the amount owed as of December 31, 2002, because a portion of the original levied amount was not yet due. We could not do this for SEC or NASD because agency data did not specify the amount owed as of December 31, 2002. As a result, SEC's and NASD's rates may be understated.

We also used NASD's calculations of its collection rates, because the design of NASD's financial system did not allow us to calculate these rates with an acceptable degree of accuracy using the approach we applied to other SROs. First, according to NASD officials, NASD's calculations used the date a fine was invoiced instead of the date it was levied. Fines were typically invoiced between 15 and 45 days after they were levied.
This difference may have had a minor effect, particularly on the annual collection rates. Second, NASD's collection rates represent the total amount collected through December 31, 2002, on fines invoiced from January 1997 through December 2002 (as opposed to the August 31, 2002, cutoff for the other SROs). Third, because NASD's system could not identify cases on a payment plan, NASD's calculations do not adjust the fine amount to the amount owing as of December 31, 2002, exerting a slight bias toward understating the collection rate. Fourth, NASD's collection rates (1) include disgorgement, because NASD was not able to separate such amounts from its payment data, and (2) exclude fines that were levied but not invoiced, because such fines were not due unless the fined individual sought to reenter the securities industry.

We also assessed the reliability of the data provided by the 11 regulators by asking officials about agency controls for collecting fines and payment data, supervising data entry, safeguarding the data from unauthorized changes, and processing the data. We also asked whether they performed data verification and testing. Although the controls varied across the agencies, each one demonstrated a basic level of system and application controls. We also performed basic tests of the integrity of the data we received from some of the regulators that provided us with individual fines data. We concluded that the data from all of the organizations, except SEC, were sufficiently reliable for the purposes of this report. The number of errors that we and SEC found in DPTS during the course of our work, together with the findings of the January 3, 2003, report to the SEC Inspector General that the data in DPTS were incomplete and inaccurate, led us to conclude that DPTS fines data remain insufficiently reliable to calculate an accurate collection rate. While we cannot be sure of the magnitude or direction of the errors in the DPTS fines data, we are nevertheless reporting the number and dollar value of cases eligible for referral to FMS and TOP, the age of this debt, and SEC collection rates as the best estimates possible at this time.

We did our work in accordance with generally accepted government auditing standards between August 15, 2002, and July 1, 2003. We performed our work in Boston, Mass.; Chicago, Ill.; New York, N.Y.; and Washington, D.C.

We calculated the securities regulators' collection rates using data from SEC and the SROs, except for NASD, which calculated its own rates (see appendix I for further details). The rates are based on fines levied from January 1997 through August 2002 and include all amounts collected on those fines through December 2002, except for NASD. We calculated the futures regulators' collection rates using data from CFTC and the SROs, based on the same levy and collection periods. In both sets of calculations, the fines data listed for each year represent collection activity on the fines levied in each of those years, and percentages were rounded to the nearest whole number.

In addition to those named above, Emily Chalmers, Marc Molino, Carl Ramirez, Jerome Sandau, Michele Tong, Sindy Udell, and Anita Zagraniczny made key contributions to this report.
Collecting fines ordered for violations of securities and futures laws helps ensure that violators are held accountable for their offenses and may also deter future violations. The requesters asked GAO to evaluate the actions the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) have taken to address earlier recommendations for improving their collection programs. The committees also asked GAO to update the fines collection rates from previous reports.

SEC and CFTC have improved their collection programs since GAO issued its 2001 fines report. While it was too early to fully assess the effectiveness of their actions, SEC could be doing more to maximize its use of Treasury's collection services. SEC has implemented regulations, procedures, collection guidelines, and controls for using the Treasury Offset Program (TOP), which applies payments the federal government owes to debtors against those debtors' outstanding debts. However, SEC has been focusing on referring to TOP those delinquent cases with amounts levied after its new collection guidelines went into effect. The agency has not developed a formal strategy for referring older cases, reducing the likelihood of collecting monies on what could be more than a billion dollars of delinquent debt. Further impeding collection efforts, SEC does not have a reliable system for tracking monies owed on these older cases and therefore could not determine which cases were not being referred to TOP. SEC has drafted an action plan for a new system to track all cases with a monetary judgment. Once the system is in place, the agency should have a tool for identifying all cases, including older delinquent cases, that can be referred to TOP. However, SEC has not established a time frame for fully implementing the plan.

GAO's calculations for closed cases (collection actions completed) showed that regulators' collection rates on fines imposed between 1997 and August 2002 equaled or exceeded those from 1992 to 1996. Recalculating the rates to include closed and open cases (collection actions ongoing) lowered SEC's and CFTC's collection rates, primarily because of a few large uncollected fines.
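For readers unfamiliar with TOP, the offset mechanism described above reduces to simple arithmetic. The sketch below is a simplified, hypothetical illustration; the function, amounts, and single-payment scope are invented, and the program's actual rules (such as limits on how much of certain payment types may be offset) are omitted.

```python
# Simplified, hypothetical sketch of a Treasury Offset Program-style
# offset: a payment the government owes a debtor is reduced by the
# debtor's delinquent debt. Real TOP rules (payment-type offset limits,
# fees, exclusions) are intentionally omitted.

def offset(payment_due, delinquent_debt):
    """Return (amount disbursed to the payee, amount applied to the debt)."""
    applied = min(payment_due, delinquent_debt)
    return payment_due - applied, applied

disbursed, applied = offset(payment_due=10_000, delinquent_debt=4_000)
print(f"Disbursed: ${disbursed:,}; applied to debt: ${applied:,}")
# Disbursed: $6,000; applied to debt: $4,000
```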
History is a good teacher. To solve the problems of today, it is important to avoid repeating past mistakes. Over the past 12 years, the department has initiated several broad-based departmentwide efforts intended to fundamentally reform its financial operations as well as other key business areas, including the Defense Reform Initiative, the Defense Business Operations Fund, and the Corporate Information Management initiative. These efforts, which are highlighted below, proved unsuccessful despite good intentions and significant effort. The conditions that led to these previous attempts at reform remain largely unchanged today.

Defense Reform Initiative (DRI). In announcing the DRI program in November 1997, the then Secretary of Defense stated that his goal was "to ignite a revolution in business affairs." DRI represented a set of proposed actions aimed at improving the effectiveness and efficiency of DOD's business operations, particularly in areas that had been long-standing problems—including financial management. In July 2000, we reported that while DRI got off to a good start and made progress in implementing many of the component initiatives, it did not meet expected time frames and goals, and the extent to which savings from these initiatives would be realized was yet to be determined. We noted that a number of barriers had kept the department from meeting its specific time frames and goals. The most notable barrier was institutional resistance to change in an organization as large and complex as DOD, particularly in such areas as acquisition, financial management, and logistics, which transcend most of the department's functional organizations and have been long-standing management concerns. We also pointed out that DOD did not have a clear road map to ensure that the interrelationships between its major reform initiatives were understood and addressed and that it was investing in its highest priority requirements. We are currently examining the extent to which DRI efforts begun under the previous administration are continuing.

Defense Business Operations Fund. In October 1991, DOD established a new entity, the Defense Business Operations Fund, by consolidating nine existing industrial and stock funds and five other activities operated throughout DOD. Through this consolidation, the fund was intended to bring greater visibility and management attention to the overall cost of carrying out certain critical DOD business operations. However, we reported that, from its inception, the fund did not have the policies, procedures, and financial systems to operate in a businesslike manner. In 1996, DOD announced the fund's elimination. In its place, DOD established four working capital funds. DOD estimated that for fiscal year 2003 these funds would account for and control about $75 billion. These new working capital funds inherited their predecessor's operational and financial reporting problems. Our reviews of these funds have found that they still are not in a position to provide accurate and timely information on the results of operations. As a result, working capital fund customers cannot be assured that the prices they are charged for goods and services represent actual costs.

Corporate Information Management (CIM). The CIM initiative began in 1989 and was expected to save billions of dollars by streamlining operations and implementing standard information systems to support common business operations.
CIM was expected to reform all of DOD's functional areas—including finance, procurement, material management, and human resources—through consolidating, standardizing, and integrating information systems. DOD also expected CIM to replace approximately 2,000 duplicative systems. Over the years, we made numerous recommendations to improve CIM's management and help preclude the wasteful use and mismanagement of billions of dollars. However, these recommendations were generally not addressed. Instead, DOD spent billions of dollars with little sound analytical justification. Rather than relying on a rigorous decision-making process for information technology investments—as used in the leading private and public organizations we studied—DOD made systems decisions without (1) appropriately analyzing costs, benefits, and technical risks; (2) establishing realistic project schedules; or (3) considering how business process improvements could affect information technology investments. For one effort alone, DOD spent about $700 million trying to develop and implement a single system for the material management business area—but this effort proved unsuccessful. We reported in 1997 that the benefits of CIM had yet to be widely achieved after 8 years of effort and about $20 billion in spending. The CIM initiative was eventually abandoned.

DOD's long-standing financial management difficulties have adversely affected the department's ability to control costs, ensure basic accountability, anticipate future costs and claims on the budget (such as for health care, weapon systems, and environmental liabilities), measure performance, maintain funds control, prevent fraud, and address pressing management issues. In this regard, I would like to briefly highlight three of our recent products that exemplify the adverse impact of DOD's reliance on fundamentally flawed financial management systems and processes and a weak overall internal control environment.

In March of this year, we testified on the continuing problems with internal controls over approximately $64 million in fiscal year 2001 purchase card transactions involving two Navy activities. Consistent with our testimony last July on fiscal year 2000 purchase card transactions at these locations, our follow-up review demonstrated that continuing control problems contributed to fraudulent, improper, and abusive purchases and to theft and misuse of government property. We are currently auditing purchase and travel card usage across the department.

In July 2001, we reported that DOD did not have adequate systems, controls, and managerial attention to ensure that the $2.7 billion of adjustments affecting closed appropriation accounts made during fiscal year 2000 were legal and otherwise proper. Our review of $2.2 billion of these adjustments found that about $615 million (28 percent) of the adjustments should not have been made, including about $146 million that violated specific provisions of appropriations law and were thus illegal. For example, the stated purpose of one adjustment was to charge a $79 million payment made in February 1999 to a fiscal year 1992 research and development appropriation account to correct previous payment recording errors. However, the fiscal year 1992 research and development appropriation account closed at the end of fiscal year 1998—4 months before the $79 million payment was made.
Therefore, the adjustment had the same effect as using canceled funds from a closed appropriation account to make the February 1999 expenditure, which is prohibited by the 1990 law. As of April 2002, DOD had reversed 140 of the 162 transactions and provided additional contract documentation for the remaining 22 transactions. However, DOD has yet to complete the reconciliation for the contracts associated with these adjustments and make the correcting entries. DOD has indicated that it will be later this year before the correcting entries are made.

In June 2001, we reported that DOD's financial systems could not adequately track and report on whether the $1.1 billion in earmarked funds that the Congress provided to DOD for spare parts and associated logistical support were actually used for their intended purpose. The vast majority of the funds—92 percent—were transferred to the military services' operation and maintenance accounts. Once the funds were transferred into the operation and maintenance accounts, the department could not separately track their use. As a result, DOD lost its ability to assure the Congress that the funds it received for spare parts purchases were used for, and only for, that purpose.

Problems with the department's financial management operations go far beyond its accounting and finance systems and processes. The department continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems—80 percent of which are not under the control of the DOD Comptroller—to gather the financial data needed to support day-to-day management decision making. This network was not designed to be, but rather has evolved into, the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity.

Many of the department's business operations are mired in old, inefficient processes and legacy systems, some of which go back to the 1950s and 1960s. For example, the department still relies on the Mechanization of Contract Administration Services (MOCAS) system—which dates back to 1968—to process a substantial portion of the contract payment transactions for all DOD organizations. In fiscal year 2001, MOCAS processed an estimated $78 billion in contract payments. Past efforts to replace MOCAS have failed, and the current effort has been delayed. As a result, for the foreseeable future, DOD will continue to be saddled with MOCAS.

In the 1970s, we issued numerous reports detailing serious problems with the department's financial management operations. Between 1975 and 1981, we issued more than 75 reports documenting serious problems with DOD's cost, property, fund control, and payroll accounting systems. In the 1980s, we found that despite the billions of dollars invested in individual systems, these efforts, too, fell far short of the mark, with extensive schedule delays and cost overruns. For example, our 1989 report on eight major DOD system development efforts under way at that time—including two major accounting systems—showed that system development cost estimates had doubled, two of the eight efforts were abandoned, and the remaining six experienced delays of 3 to 7 years.
Two recent system endeavors that have fallen short of their intended goals are the Standard Procurement System and the Defense Joint Accounting System. Both of these efforts were aimed at improving the department's financial management and related business operations.

Standard Procurement System (SPS). In November 1994, DOD began the SPS program to acquire and deploy a single automated system to perform all contract management-related functions within DOD's procurement process for all DOD organizations and activities. The laudable goal of SPS was to replace 76 existing procurement systems with a single departmental system. DOD estimated that SPS had a life-cycle cost of approximately $3 billion over a 10-year period. According to DOD, SPS was to support about 43,000 users at over 1,000 sites worldwide and was to interface with key financial management functions such as payment processing. Additionally, SPS was intended to replace the contract administration functions currently performed by MOCAS. Our July 2001 report and February 2002 testimony before this Subcommittee identified weaknesses in the department's management of its investment in SPS. Specifically, the department had not economically justified its investment in the program, because its latest (January 2000) analysis of costs and benefits was not credible; indeed, that analysis showed that the system, as defined, was not a cost-beneficial investment. The department was not accumulating actual program costs and therefore did not know the total amount spent on the program to date, yet life-cycle cost projections had grown from about $3 billion to $3.7 billion. And although the department committed to fully implementing the system by March 31, 2000, this target date had slipped by over 3-1/2 years to September 30, 2003, and program officials have recently stated that this date will also not be met. We recommended that the Secretary of Defense make additional investments in SPS conditional upon first demonstrating that the existing version of SPS is producing benefits that exceed costs and that future investment decisions, including those regarding operations and maintenance beyond fiscal year 2001, be based on complete and reliable economic justifications.

Defense Joint Accounting System (DJAS). In 1997, DOD selected DJAS to be one of three general fund accounting systems. As originally envisioned, DJAS would perform the accounting for the Army and the Air Force as well as for the DOD transportation and security assistance areas. Subsequently, in February 1998, the Defense Finance and Accounting Service (DFAS) decided that the Air Force could withdraw from using DJAS. DFAS made the decision because either the Air Force processes or the DJAS processes would have needed significant reengineering to allow for the development of a joint accounting system. As a result, the Air Force was allowed to start development of its own general fund accounting system—the General Fund and Finance System—which resulted in the development of a fourth general fund accounting system. In June 2000, the DOD Inspector General reported that DFAS was developing DJAS at an estimated life-cycle cost of about $700 million without demonstrating that the program was the most cost-effective alternative for providing a portion of DOD's general fund accounting.
More specifically, the report stated that DFAS had not developed a complete or fully supportable feasibility study, analysis of alternatives, economic analysis, acquisition program baseline, or performance measures and had not reengineered business processes. According to data provided by DFAS, for fiscal years 1997-2000 approximately $120 million was spent on the development and implementation of DJAS. However, today DJAS is being operated at only two locations—Ft. Benning, Georgia, and the Missile Defense Agency. Nonetheless, according to a DFAS official, DJAS is considered to be fully deployed—which means it is operating at all intended locations.

Significant resources—in terms of dollars, time, and people—have been invested in these two efforts without demonstrated improvement in DOD's business operations. It is essential that DOD ensure that its investment in systems modernization results in more effective and efficient business operations, since every dollar spent on ill-fated efforts such as SPS and DJAS is one less dollar available for other defense spending priorities.

As part of our constructive engagement approach with DOD, the Comptroller General met with Secretary Rumsfeld last summer to provide our perspectives on the underlying causes of the problems that have impeded past reform efforts at the department and to discuss options for addressing these challenges. There are four underlying causes: a lack of sustained top-level leadership and management accountability; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures; and inadequate incentives for seeking change.

Historically, DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority to accomplish desired goals. For example, under the CFO Act, it is the responsibility of agency CFOs to establish the mission and vision for the agency's future financial management. However, at DOD, the Comptroller—who is by statute the department's CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department's financial management operations.

The department has learned through its efforts to meet the Year 2000 computing challenge that, to be successful, major improvement initiatives must have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense. In the Year 2000 case, the then Deputy Secretary of Defense was personally and substantially involved and played a major role in the department's success. Such top-level support and attention helps ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes. A central finding from our report on our survey of best practices at world-class financial management organizations—Boeing; Chase Manhattan Bank; General Electric; Pfizer; Hewlett-Packard; Owens Corning; and the states of Massachusetts, Texas, and Virginia—was that clear, strong executive leadership was essential to (1) making financial management an entitywide priority, (2) redefining the role of finance, (3) providing meaningful information to decision makers, and (4) building a team of people that delivers results.
DOD's past experience suggests that top management has not had a proactive, consistent, and continuing role in building capacity, integrating daily operations for achieving performance goals, and creating incentives. Sustaining top management commitment to performance goals is a particular challenge for DOD. In the past, the average 1.7-year tenure of the department's top political appointees has served to hinder long-term planning and follow-through.

Cultural resistance to change and military service parochialism have also played a significant role in impeding previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization, and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures. For example, as discussed in our July 2000 report, the department encountered resistance to developing departmentwide solutions under the then Secretary's broad-based DRI. In 1997, the department established a Defense Management Council—including high-level representatives from each of the military services and other senior executives in the Office of the Secretary of Defense—which was intended to serve as the "board of directors" to help break down organizational stovepipes and overcome cultural resistance to the changes called for under DRI. However, we found that the council's effectiveness was impaired because members were not able to put their individual military services' or DOD agencies' interests aside to focus on departmentwide approaches to long-standing problems.

Cultural resistance to change has impeded reforms not only in financial management but also in other business areas, such as weapon system acquisition and inventory management. For example, as we reported last year, while the individual military services conduct considerable analyses justifying major acquisitions, these analyses can be narrowly focused and do not consider joint acquisitions with the other services. In the inventory management area, DOD's culture has supported buying and storing multiple layers of inventory rather than managing with just the amount of stock needed.

DOD's past reform efforts have also been handicapped by the lack of clear, linked goals and performance measures. As a result, DOD managers lack straightforward road maps showing how their work contributes to attaining the department's strategic goals, and they risk operating autonomously rather than collectively. In some cases, DOD had not yet developed appropriate strategic goals, and in other cases, its strategic goals and objectives were not linked to those of the military services and defense agencies. As part of our assessment of DOD's Fiscal Year 2000 Financial Management Improvement Plan, we reported that, for the most part, the plan represented the military services' and defense components' stovepiped approaches to reforming financial management and did not clearly articulate how these various efforts would collectively result in an integrated DOD-wide approach to financial management improvement. In addition, we reported that the department's plan did not include performance measures that could be used to assess DOD's progress in resolving its financial management problems.
DOD officials have informed us that they are now working to revise the department's approach to this plan so that future years' updates will reflect a more strategic, departmentwide vision and provide a more effective tool for financial management reform.

As it moves to modernize its systems, the department faces a formidable challenge in responding to technological advances that are changing traditional approaches to business management. For fiscal year 2003, DOD's information technology budgetary request of approximately $26 billion will support a wide range of military operations as well as DOD business functions. As we have reported, while DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments. As we recently testified, our review of practices at leading organizations showed they were able to make sure their business systems addressed corporate—rather than individual business unit—objectives by using enterprise architectures to guide and constrain investments. Consistent with our recommendation, DOD is now working to develop a financial management enterprise architecture, which is a very positive development.

The final underlying cause of the department's long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing "business-as-usual" processes, systems, and structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD generally measures its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. Secretary Rumsfeld perhaps said it best in announcing his planned transformation at DOD: "There will be real consequences from, and real resistance to, fundamental change."

This lack of incentive has perhaps been most evident in the department's acquisition area. In DOD's culture, the success of a manager's career has depended more on moving programs and operations through the DOD process than on achieving better program outcomes. The fact that a given program may have cost more than estimated, taken longer to complete, and not generated results or performed as promised was secondary to fielding a new program. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide and congressional goals; (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying "no" or pulling the plug on a system or program that is failing; and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource-allocation decisions.

Recognizing the need for improved financial data to effectively manage the department's vast operations, Secretary Rumsfeld commissioned an independent study to recommend a strategy for financial management improvements. The report recognized that the department would have to undergo "a radical financial management transformation" and that it would take more than a decade to achieve. The report also noted that DOD's current financial, accounting, and feeder systems do not provide relevant, reliable, and timely information.
Further, the report pointed out that the "support of management decision-making" is generally not an objective of the financially based information currently developed or planned for future development. Additionally, the report stated that although the department had numerous system projects under way, they were narrowly focused, lacked senior management leadership, and were not part of an integrated DOD-wide strategy. The report also noted that the systemic problems discussed were not strictly financial management problems and could not be solved by DOD's financial community. Rather, the solution would require the "concerted effort and cooperation of cross-functional communities throughout the department."

The report recommended an integrated approach to transforming the department's financial operations. It noted that its proposed framework would take advantage of certain ongoing improvement actions within the department and provide specific direction for a more coordinated, managed, and results-oriented approach. The proposed course of action for transforming the department's financial management centered on six broad elements: (1) leadership, (2) incentives, (3) accountability, (4) organizational alignment, (5) changes in certain rules, and (6) changes in enterprise practices. The report referred to its approach as a "twin-track" course of action. The first track employs a DOD-wide management approach to developing standard integrated systems; obtaining relevant, reliable, and timely financial data; and providing incentives for the department to use financial data in an efficient and effective way. This track will require a longer time frame and will include establishing a centralized oversight process under the DOD Comptroller for incrementally implementing the recommended structural changes and developing standard, integrated financial systems. The second track focuses on targeting, selecting, and overseeing implementation of a limited number of intraservice/cross-service projects for major cost savings or other high-value benefits under a process led by the DOD Comptroller, and on assisting the Secretary of Defense in establishing and managing a set of metrics. Prime tools of such improvements would include activity-based costing and benchmarking/best practices analysis to identify cost-saving opportunities.

A July 19, 2001, departmental memorandum from Secretary Rumsfeld confirmed that the department needs to develop and implement an architecture for achieving integrated financial and accounting systems in order to generate relevant, reliable, and timely information on a routine basis. Secretary Rumsfeld further reiterated the need for a fundamental transformation of DOD in his "top-down" Quadrennial Defense Review. Specifically, his September 30, 2001, Quadrennial Defense Review Report concluded that the department must transform its outdated support structure, including decades-old financial systems that are not well interconnected. The report summed up the challenge well in stating: "While America's businesses have streamlined and adopted new business models to react to fast-moving changes in markets and technologies, the Defense Department has lagged behind without an overarching strategy to improve its business practices."

Our experience has shown that there are several key elements that collectively would enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial management problems.
For the most part, these elements are consistent with those discussed in the department’s April 2001 financial management transformation report. These elements, which we believe are key to any successful approach to financial management reform, include addressing the department’s financial management challenges as part of a comprehensive, integrated, DOD-wide business reform; providing for sustained leadership by the Secretary of Defense and resource control to implement needed financial management reforms; establishing clear lines of responsibility, authority, and accountability for such reform tied to the Secretary; incorporating results-oriented performance measures and monitoring tied to financial management reforms; providing appropriate incentives or consequences for action or inaction; establishing and implementing an enterprise architecture to guide and direct financial management modernization investments; and ensuring effective oversight and monitoring. Actions on many of the key areas central to successfully achieving desired financial management and related business transformation goals—particularly those that rely on longer-term systems improvements—will take a number of years to fully implement. Secretary Rumsfeld has estimated that his envisioned transformation may take 8 or more years to complete. Our research and experience with other federal agencies have shown that this is not an unrealistic estimate. Additionally, these keys should not be viewed as independent actions but rather as a set of interrelated and interdependent actions that are collectively critical to transforming DOD’s business operations. Consequently, both long-term actions focused on the Secretary’s envisioned business transformation and short-term actions focused on improvements within existing systems and processes will be critical going forward. Short-term actions in particular will be critical if the department is to achieve the greatest possible accountability over existing resources and more reliable data for day-to-day decision making while longer-term systems and business process reengineering efforts are under way. Beginning with the Secretary’s recognition of a need for a fundamental transformation of the department’s business operations and building on some of the work begun under past administrations, DOD has taken a number of positive steps in many of these key areas, but these steps are only a beginning. Formidable challenges remain in each of these key areas. As we previously reported, establishing the right goal is essential for success. Central to effectively addressing DOD’s financial management problems will be the recognition that they cannot be addressed in an isolated, stovepiped, or piecemeal fashion separate from the other high-risk areas facing the department. Further, successfully reforming the department’s operations—which consist of people, business processes, and technology—will be critical if DOD is to effectively address the deep-rooted organizational emphasis on maintaining business-as-usual across the department. Financial management is a crosscutting issue that affects virtually all of DOD’s business areas. For example, improving its financial management operations so that they can produce timely, reliable, and useful cost information is essential to effectively measure its progress toward achieving many key outcomes and goals across virtually the entire spectrum of DOD’s business operations.
At the same time, the department’s financial management problems—and, most importantly, the keys to their resolution—are deeply rooted in and dependent upon developing solutions to a wide variety of management problems across DOD’s various organizations and business areas. For example, we have reported that many of DOD’s financial management shortcomings were attributable in part to human capital issues. The department does not yet have a strategy in place for improving its financial management human capital. This is especially critical in connection with DOD’s civilian workforce, since DOD has generally done a much better job of human capital planning for its military personnel. In addition, DOD’s civilian personnel face a variety of size, shape, skills, and succession-planning challenges that need to be addressed. As we mentioned earlier, and it bears repetition, the department has reported that an estimated 80 percent of the data needed for sound financial management comes from its other business operations, such as its acquisition and logistics communities. DOD’s vast array of costly, nonintegrated, duplicative, and inefficient financial management systems is reflective of its lack of an integrated approach to addressing management challenges. DOD has acknowledged that one of the reasons for the lack of clarity in its reporting under the Government Performance and Results Act has been that most of the program outcomes the department is striving to achieve are interrelated, while its management systems are not integrated. As we discussed previously, the Secretary of Defense has made the fundamental transformation of business practices throughout the department a top priority. In this context, the Secretary established a number of top-level committees, councils, and boards, including the Senior Executive Committee, Business Initiative Council, and Defense Business Practices Implementation Board. The Senior Executive Committee was established to help guide efforts across the department to improve its business practices. This committee—chaired by the Secretary of Defense, and with membership to include the Deputy Secretary, the military service secretaries, and the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to function as the “board of directors” for the department. The Business Initiative Council—comprising the military service secretaries and headed by the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to encourage the military services to explore new money-saving business practices to help offset funding requirements for transformation and other initiatives. Our research of successful public and private sector organizations shows that such entities, composed of enterprisewide executive leadership, provide valuable guidance and direction when pursuing integrated solutions to corporate problems. Inclusion of the department’s top leadership should help to break down the cultural barriers to change and result in an integrated DOD approach for business reform. The department’s successful Year 2000 effort illustrated, and our survey of leading financial management organizations captured, the importance of strong leadership from top management. As we have stated many times before, strong, sustained executive leadership is critical to changing a deeply rooted corporate culture—such as the existing “business-as-usual” culture at DOD—and to successfully implementing financial management reform.
In the case of the Year 2000 challenge, the personal, active involvement of the Deputy Secretary of Defense played a key role in building entitywide support and focus. Given the long-standing and deeply entrenched nature of the department’s financial management problems—combined with the numerous competing DOD organizations, each operating with varying, often parochial views and incentives—such visible, sustained top-level leadership will be critical. In discussing their April 2001 report to the Secretary of Defense on transforming financial management, the authors stated that, “unlike previous failed attempts to improve DOD’s financial practices, there is a new push by DOD leadership to make this issue a priority.” Strong, sustained executive leadership—over a number of years and administrations—will be key to changing a deeply rooted culture. In addition, given that significant investments in information systems and related processes have historically occurred in a largely decentralized manner throughout the department, additional actions will likely be required to implement centralized information technology investment control. In our May 2001 report, we recommended that DOD take action to provide senior departmental commitment and leadership through establishment of an enterprisewide steering committee, sponsored by the Secretary, that could guide development of a transformation blueprint and provide for centralized control over investments to ensure funding is provided only for those proposed investments in systems and business reforms that are consistent with the blueprint. Absent such a control, DOD runs the serious risk that the fiscal year 2003 information technology budgetary request of approximately $26 billion and future years’ requests will not result in marked improvement in DOD’s business operations. Without such an approach, DOD runs the risk of spending billions of dollars on systems modernization that perpetuates the existing systems environment, which suffers from duplication of systems, limited interoperability, and unnecessarily costly operations and maintenance, and that precludes DOD from achieving the Secretary’s vision of improved financial information on the results of departmental operations. Additionally, as previously discussed, the tenure of the department’s top political appointees has generally been short in duration, and as a result it is sometimes difficult to maintain the focus and momentum that is needed to resolve the management challenges facing DOD. The resolution of the array of interrelated business system management challenges previously discussed is likely to require a number of years and therefore span several administrations. The Comptroller General has proposed in congressional testimony that one option to consider to address the continuity issue would be the establishment of the position of chief operating officer. This position could be filled by an individual appointed for a set term of 5 to 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent for large, diverse organizations and would spearhead business process transformation across the department. Last summer, when the Comptroller General met with Secretary Rumsfeld, he stressed the importance of establishing clear lines of responsibility, decision-making authority, and resource control for actions across the department tied to the Secretary as a key to reform.
As we previously reported, such an accountability structure should emanate from the highest levels and include the secretary of each of the military services as well as heads of the department’s various major business areas. The Secretary of Defense has taken action to vest responsibility and accountability for financial management modernization with the DOD Comptroller. In October 2001, the DOD Comptroller, as previously mentioned, established the Financial Management Modernization Executive and Steering Committees as the governing bodies that oversee the activities related to this modernization effort and also established a supporting working group to provide day-to-day guidance and direction in these efforts. DOD reports that the executive and steering committees met for the first time in January 2002. At the request of the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, we are initiating a review of the department’s efforts to develop and implement an enterprise architecture. As part of the effort, we will be assessing the department’s efforts to align current investments in financial systems with the proposed architecture. It is clear to us that the DOD Comptroller has the full support of the Secretary and that the Secretary is committed to making meaningful change. The key is to translate this support into a funding control mechanism that ensures DOD components’ information technology investments are aligned with the department’s strategic blueprint. Addressing issues such as centralization of authority for information systems investments and continuity of leadership is critical to successful business transformation. To make this work, it is important that the DOD Comptroller have sufficient authority to oversee the investment decisions in order to bring about the full, effective participation of the military services and business process owners across the department. As discussed in our January 2001 report on DOD’s major performance and accountability challenges, establishing a results orientation is another key element of any approach to reform. Such an orientation should draw upon results that could be achieved through commercial best practices, including outsourcing and shared servicing concepts. Personnel throughout the department must share the common goal of establishing financial management operations that not only produce financial statements that can withstand the test of an audit but, more importantly, routinely generate useful, reliable, and timely financial information for day-to-day management purposes. In addition, we have previously testified that DOD’s financial management improvement efforts should be measured against an overall goal of effectively supporting DOD’s basic business processes, including appropriately considering related business process system interrelationships, rather than determining system-by-system compliance. Such a results-oriented focus is also consistent with an important lesson learned from the department’s Year 2000 experience. DOD’s initial Year 2000 focus was geared toward ensuring compliance on a system-by-system basis and did not appropriately consider the interrelationships of systems and business areas across the department. It was not until the department, under the direction of the then Deputy Secretary, shifted to a core mission and function review approach that it was able to achieve the desired result of greatly reducing its Year 2000 risk.
Since the Secretary has established an overall business process transformation goal that will require a number of years to achieve, going forward it is especially critical for managers throughout the department to focus on specific metrics that, over time, collectively will translate to achieving this overall goal. It is important for the department to refocus its annual accountability reporting on this overall goal of fundamentally transforming the department’s financial management systems and related business processes, and to include appropriate interim annual measures for tracking progress toward this goal. In the short term, it is important to focus on actions that can be taken using existing systems and processes, and to establish interim measures that both track performance against the department’s overall transformation goals and facilitate near-term successes. The department has established an initial set of metrics intended to evaluate financial performance, and it reports that it has seen improvements. For example, DOD reported that the dollar value of adjustments to closed appropriation accounts during the first 6 months of fiscal year 2002 declined about 80 percent from the same 6-month period in fiscal year 2001. Other existing metrics concern cash and funds management, contract and vendor payments, and disbursement accounting. We are initiating a review of DOD’s short-term financial management performance metrics and will provide the Subcommittee with the results of our review. DOD also reported that it is working to develop these metrics into higher-level measures more appropriate for senior management. We support the department’s efforts to expand the use of appropriate metrics to guide its financial management reform efforts. Another key to breaking down the parochial interests and stovepiped approaches that have plagued previous reform efforts is establishing mechanisms to reward organizations and individuals for behaviors that comply with DOD-wide and congressional goals. Such mechanisms should be geared to providing appropriate incentives and penalties to motivate decision makers to initiate and implement efforts that result in fundamentally reformed financial management and other business support operations. In addition, such incentives and consequences are essential to overcoming those parochial interests. Incentives driving traditional ways of doing business, for example, must be changed, and cultural resistance to new approaches must be overcome. Simply put, DOD must convince people throughout the department that they must change from business-as-usual systems and practices or they are likely to face serious consequences, organizationally and personally. Enterprise architecture development, implementation, and maintenance are basic tenets of effective information technology management. Used in concert with other information technology management controls, an architecture can increase the chances for optimal mission performance. We have found that attempting to modernize operations and systems without an architecture leads to operational and systems duplication, lack of integration, and unnecessary expense. Our best practices research of successful public and private sector organizations has similarly identified enterprise architectures as essential to effective business and technology transformation.
Establishing and implementing a financial management enterprise architecture is essential for the department to effectively manage its modernization effort. The Clinger-Cohen Act requires major departments and agencies to develop, implement, and maintain an integrated architecture. As we previously reported, such an architecture can help ensure that the department invests only in integrated business system solutions and, conversely, will help move resources away from non-value-added legacy business systems and nonintegrated business system development efforts. Without an enterprise architecture to guide and constrain information technology investments, DOD runs the serious risk that its system efforts will perpetuate the existing system environment that suffers from systems duplication, limited interoperability, and unnecessarily costly operations and maintenance. In our May 2001 report, we pointed out that DOD lacks a financial management enterprise architecture to guide and constrain the billions of dollars it plans to spend to modernize its financial management operations and systems. Accordingly, we recommended that the department develop and implement an architecture in accordance with DOD’s policies and guidance and that senior management be involved in the investment decision-making process. DOD has awarded a contract for the development of a DOD-wide financial management enterprise architecture to “achieve the Secretary’s vision of relevant, reliable and timely financial information needed to support informed decision-making.” In fiscal year 2002, DOD received approximately $98 million and has requested another $96 million for fiscal year 2003 for this effort. Consistent with the recommendations contained in our January 1999 and May 2001 reports, DOD has begun an extensive effort to document the department’s current “as-is” financial management architecture by identifying systems currently relied upon to carry out financial management operations throughout the department. To date, the department has identified over 1,100 systems that are involved in the processing of financial information. In developing the “as-is” environment, DOD has recognized that financial management is broader than just accounting and finance systems. Rather, it includes the department’s budget formulation, acquisition, inventory management, logistics, personnel, and property management systems. In developing and implementing its enterprise architecture, DOD needs to ensure that the multitude of systems efforts currently underway are designed as an integral part of the architecture. As discussed in our May 2001 report, the Army and the Defense Logistics Agency (DLA) are investing in financial management solutions that are estimated to cost about $700 million and $900 million, respectively. Further, the Naval Audit Service has reported that the Navy has efforts underway that are estimated to cost about $2.5 billion. These programs—commercial enterprise resource planning (ERP) products—are intended to implement different commercially available products for automating and reengineering various operations within the organization. Among the functions that these ERP programs address is financial management. However, since DOD has yet to develop and implement its architecture, there is no assurance that these separate efforts will result in systems that are compatible with the DOD-designated architecture.
For example, the Naval Audit Service reported that there are interoperability problems with the four Navy ERP efforts and that the entire program lacks appropriate management oversight. The effort to develop a financial management architecture will be further complicated as the department strives to develop multiple architectures across its various business areas and organizational components. For example, in June 2001, we recommended that DLA develop an architecture to guide and constrain its Business Systems Modernization acquisition. Additionally, we recommended that the department develop a DOD-wide logistics management architecture that would promote interoperability and avoid duplication among the logistics modernization efforts now under way in DOD component organizations, such as DLA and the military services. As previously discussed, control and accountability over investments are critical. DOD can ill afford another CIM, which invested billions of dollars but did not result in systems capable of providing DOD management and the Congress with more accurate, timely, and reliable information on the results of the department’s vast operations. To better control DOD’s investments, we recommended in our May 2001 report that, until the architecture is developed, investments be limited to (1) deployment of systems that have already been fully tested and involve no additional development or acquisition cost, (2) stay-in-business maintenance needed to keep existing systems operational, (3) management controls needed to effectively invest in modernized systems, and (4) new systems or existing system changes that are congressionally directed or are relatively small, cost effective, and low risk and can be delivered in a relatively short time frame. Ensuring effective monitoring and oversight of progress will also be key to bringing about effective implementation of the department’s financial management and related business process reform. We have previously testified that periodic reporting of status information to department top management, the Office of Management and Budget (OMB), the Congress, and the audit community is another key lesson learned from the department’s successful effort to address its Year 2000 challenge. Previous submissions of the department’s Financial Management Improvement Plan have simply been compilations of data call information on the stovepiped approaches to financial management improvements received from the various DOD components. It is our understanding that DOD plans to change its approach and anchor the plan in the enterprise architecture. If the department’s future plans are upgraded to provide a departmentwide strategic view of the financial management challenges facing the department, along with planned corrective actions, these plans can serve as an effective tool not only to help guide and direct the department’s financial management reform efforts, but also to help maintain oversight of the department’s financial management operations. Going forward, this Subcommittee’s oversight hearings, as well as the active interest and involvement of the defense appropriations and authorization committees, will continue to be key to effectively achieving and sustaining DOD’s financial management and related business process reform milestones and goals.
The Department of Defense (DOD) faces complex financial and management problems that are deeply rooted in DOD's business operations and management culture. During the past 12 years, DOD has begun several broad-based departmentwide reform efforts to overhaul its financial operations and other key business areas. These efforts have been unsuccessful. GAO identified several key elements that are essential to the success of any DOD financial management reform effort. These include (1) addressing the department's financial management challenges as part of a comprehensive, integrated, DOD-wide business reform; (2) providing for sustained leadership and resource control to implement needed reforms; (3) establishing clear lines of responsibility, authority, and accountability for such reform; (4) incorporating results-oriented performance measures and monitoring tied to the reforms; (5) providing appropriate incentives or consequences for action or inaction; (6) establishing and implementing an enterprise architecture to guide and direct financial management modernization investments; and (7) ensuring effective oversight and monitoring.
In 2002, GAO reported that the number of restatement announcements due to financial reporting fraud and/or accounting errors grew significantly between January 1997 and June 2002, negatively impacting the restating companies’ market capitalization by billions of dollars. GAO was asked to update key aspects of its 2002 report (GAO-03-138). This report discusses (1) the number of, reasons for, and other trends in restatements; (2) the impact of restatement announcements on the restating companies’ stock prices and what is known about investors’ confidence in U.S. capital markets; and (3) regulatory enforcement actions involving accounting- and audit-related issues. To address these issues, GAO collected restatement announcements meeting GAO’s criteria, calculated and analyzed the impact on company stock prices, obtained input from researchers, and analyzed selected regulatory enforcement actions. While the proportion of public companies announcing financial restatements rose from 3.7 percent in 2002 to 6.8 percent in 2005 (through September), the number of restatement announcements GAO identified grew about 67 percent over this period. Industry observers noted that increased restatements were an expected byproduct of the greater focus on the quality of financial reporting by company management, audit committees, external auditors, and regulators. GAO also observed the following trends: (1) cost- or expense-related reasons accounted for 35 percent of the restatements, including lease accounting issues, followed in frequency by revenue recognition issues; and (2) most restatements (58 percent) were prompted by an internal party such as management or internal auditors. In the wake of increased restatements, SEC standardized disclosure requirements by requiring companies to file a specific item on Form 8-K when a company’s previously reported financial statements should no longer be relied upon. However, between August 2004 and September 2005, about 17 percent of the companies GAO identified as restating did not appear to file the proper disclosure when they announced their intention to restate. These companies continued to announce intentions to restate previously issued financial statements in a variety of other formats. Public confidence in the reliability of financial reporting is critical to the effective functioning of the securities markets, and various federal laws and entities help ensure that the information provided meets such standards. Federal securities laws help to protect the investing public by requiring public companies to disclose financial and other information. SEC was established by the Securities Exchange Act of 1934 (the Exchange Act) to operationalize and enforce securities laws and oversee the integrity and stability of the market for publicly traded securities. SEC is the primary federal agency involved in accounting requirements for publicly traded companies. Under Section 108 of the Sarbanes-Oxley Act, SEC has recognized the accounting standards set by the Financial Accounting Standards Board (FASB)—generally accepted accounting principles (GAAP)—as “generally accepted” for the purposes of the federal securities laws. SEC reviews and comments on registrant filings and issues interpretive guidance and staff accounting bulletins on accounting matters.
To issue securities for trading on an exchange, a public company must register the securities offering with SEC, and to register, the company must meet requirements set by the Exchange Act, as amended, including the periodic disclosure of financial and other information important to investors. The regulatory structure of U.S. markets is premised on a concept of corporate governance that makes officers and directors of a public company responsible for ensuring that the company’s financial statements fully and accurately describe its financial condition and the results of its activities. Company financial information is publicly disclosed in financial statements that are to be prepared in accordance with standards set by FASB and guidance issued by SEC. The integrity of these financial statements is essential if they are to be useful to investors and other stakeholders. In addition to the requirements and standards previously discussed, the securities acts and subsequent law set requirements for annual audits of the financial statements by registered public accounting firms to help ensure the integrity of financial statements. The applicable standards under these laws require that auditors plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement. An audit includes an examination, on a test basis, of evidence supporting the amounts and disclosures in the financial statements; an assessment of the accounting principles used and significant estimates made by management; and an evaluation of the overall financial statement presentation. The purpose of the auditor’s report is to provide reasonable assurance about whether the financial statements present fairly, in all material respects, the financial position of the company, the results of its operations, and its cash flows, in conformity with U.S. GAAP. The Sarbanes-Oxley Act reinforces principles and strengthens requirements (established in previous law), including measures for improving the accuracy, reliability, and transparency of corporate financial reporting. Specifically, Section 302 requires the chief executive officer (CEO) and chief financial officer (CFO) to certify for each annual and quarterly report filed with SEC that they have reviewed the report; the report does not contain untrue statements or omissions of a material fact; and the financial information in the report is fairly presented. In addition, Section 404 requires company management to annually (1) assess its internal control over financial reporting and report the results to SEC and (2) have a registered public accounting firm attest to and report on management’s assessment of effectiveness of internal control over financial reporting. While larger public companies have implemented Section 404, most companies with less than $75 million in public float—about 60 percent of all public companies—have yet to complete this process. (See app. IV for further discussion of the act.) To oversee the auditing of publicly traded companies, the Sarbanes-Oxley Act established PCAOB, a private-sector nonprofit organization. Subject to SEC oversight, PCAOB sets standards for, registers, and inspects the independent public accounting firms that audit public companies and has the authority to conduct investigations and disciplinary proceedings and impose sanctions for violations of law or PCAOB rules and standards.
Specifically, Section 105 of the Sarbanes-Oxley Act granted PCAOB broad investigative and disciplinary authority over registered public accounting firms and persons associated with such firms. In May 2004, SEC approved PCAOB’s rules implementing this authority. According to the rules, PCAOB staff may conduct investigations concerning any acts or practices, or omissions to act, by registered public accounting firms and persons associated with such firms, or both, that may violate any provision of the act, PCAOB rules, the provisions of the securities laws relating to the preparation and issuance of audit reports and the obligations and liabilities of accountants with respect thereto, including SEC rules issued under the act, or professional standards. Furthermore, PCAOB’s rules require registered public accounting firms and their associated persons to cooperate with PCAOB investigations, including producing documents and providing testimony. The rules also permit PCAOB to seek information from other persons, including clients of registered firms. See figure 1 for the existing system of corporate governance and accounting oversight structures. Although the number of public companies restating their publicly reported financial information due to financial reporting fraud and/or accounting errors remained a relatively small percentage of all publicly listed companies, the number of restatements has grown since 2002. For example, 314 restatement announcements were made in 2002 and 523 in 2005 (through September). In addition, of the 1,390 announced restatements we identified, the percentage of large companies announcing restatements has continued to grow since 2002. While large and small companies restate their financial results for varying reasons, change in cost- or expense-related items, which includes lease accounting issues, was the most frequently cited reason for restating. While both internal and external parties could prompt restatements, internal parties such as company management or internal auditors prompted the majority of restatement announcements. Finally, we found that, despite SEC’s efforts to create a more transparent mechanism for disclosing restatements through revisions to Form 8-K, some companies had not properly filed such disclosures and continued to announce intentions to restate previously issued financial statements in a variety of other formats. The number of annual announcements of financial restatements generally increased, from 314 in 2002 to 523 in 2005 (through September)—an increase of approximately 67 percent (see fig. 2). This constituted a nearly five-fold increase from 92 in 1997 to 523 in 2005. Furthermore, from July 2002 through September 2005, a total of 1,121 public companies made 1,390 restatement announcements. Some industry observers noted that several factors may have prompted more U.S. publicly traded companies to restate previously reported financial results, including (1) the financial reporting requirements of the Sarbanes-Oxley Act, especially the certification of financial reports required by Section 302 and the internal controls provisions of Section 404; (2) increased scrutiny from the newly formed PCAOB through its inspections of registered public accounting firms; and (3) increased staffing and review by SEC. As the number of restatement announcements rose, the numbers of listed companies making the announcements increased as well.
While the average number of companies listed on NYSE, Nasdaq, and Amex decreased about 10 percent from 7,144 in 2002 to 6,473 in 2005, the number of listed companies restating their financial results increased from 265 in 2002 to 439 in 2005 (through September), representing about a 67 percent increase (see table 1). On a yearly basis, the proportion of listed companies restating grew from 3.7 percent in 2002 to 6.8 percent in 2005. Over the period of January 1, 2002, through September 30, 2005, the total number of restating companies (1,084) represents 16 percent of the average number of listed companies from 2002 to 2005, as compared to almost 8 percent during the 1997–2001 period. A number of other researchers also found that restatements had increased since calendar year 2002. The researchers used somewhat different search methodologies to identify companies that restate previously reported financial information and applied slightly different inclusion criteria but arrived at similar conclusions. The Huron Consulting Group (HCG) identified 1,067 financial statement restatements from 2002 to 2004 and noted that the increase was significant from 2003 to 2004. Also, Glass, Lewis & Co. LLC (Glass Lewis) identified 2,319 restatements of previously issued financial information by U.S. public companies from 2003 to 2005 and also found an increase in the number of restatements over that period. Unlike our work, which included a limited number of companies traded on the OTC Bulletin Board or on Pink Sheets, the Glass Lewis study also included hundreds of smaller companies quoted on the OTC Bulletin Board or on Pink Sheets that generally lacked analyst coverage. See appendix VI for a comparison of various restatement studies. For the restatements we identified, the number of large companies announcing restatements of their previously reported financial information due to financial reporting fraud and/or accounting errors has increased. More specifically, large companies (i.e., companies having over $1 billion in total assets) as a percentage of total restating companies have increased from about 30 percent in 2001 to over 37 percent in 2005. Likewise, the average market capitalization of a company announcing a restatement (for which we had data) has grown from under $3 billion (with a median of $277 million) in the latter half of 2002 to over $10 billion (with a median of $682 million) through September 2005. While the average size of listed companies increased about 68 percent from 2002 to 2005, the average size of companies restating their financials grew almost 300 percent. Another indication that the number of large public companies announcing restatements has continued to increase is the number of restating companies listed on NYSE, which has more large companies than the other U.S. stock exchanges. For example, between 2002 and September 2005, the number of NYSE-listed companies announcing restatements increased 64 percent from 114 to 187. During the same time, the number of Nasdaq-listed companies announcing restatements increased 55 percent from 137 to 212, and the number of Amex-listed restating companies increased more than 175 percent from 14 to 40. While more Nasdaq-listed companies announced restatements than NYSE-listed companies, the proportion of NYSE-listed companies restating (relative to the total number of companies listed on the NYSE) surpassed Nasdaq-listed companies over the period 2002–2005.
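The restatement rates above follow directly from the counts cited in this section. The short Python sketch below reproduces that arithmetic; it is an illustration only, using the figures cited above rather than a separate data source, and the function names are ours.

    def pct_change(old, new):
        # Percentage change from old to new
        return (new - old) / old * 100

    def restating_share(restating, listed):
        # Restating companies as a share of average listed companies
        return restating / listed * 100

    # Restatement announcements: 314 (2002) vs. 523 (2005, through September)
    print(round(pct_change(314, 523)))           # 67 (percent)

    # Listed companies restating vs. average number of listed companies
    print(round(restating_share(265, 7144), 1))  # 3.7 (percent, 2002)
    print(round(restating_share(439, 6473), 1))  # 6.8 (percent, 2005)
    print(round(pct_change(265, 439)))           # 66 (percent; "about 67" above)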
As figure 3 illustrates, for the announced restatements we identified, in 2002, about 4 percent of NYSE-listed companies announced restatements for financial reporting fraud and/or accounting errors, whereas this percentage rose to more than 7 percent by September 2005. During the same period, the percentage of Nasdaq-listed restating companies rose from less than 4 percent to almost 7 percent. From 2002 to 2005, the percentages of NYSE- and Nasdaq-listed companies restating essentially mirrored each other, declining and then increasing over the period. However, the percentage of Amex-listed restating companies rose each year during the 2002 to September 2005 period, from about 2.0 percent to almost 5.5 percent. Although public companies restate their financial results for a variety of reasons, cost- or expense-related issues accounted for more than one-third of the 1,390 restatement announcements identified from July 2002 through September 2005 (see fig. 4). We classified cost- or expense-related restatements generally to include a company understating or overstating costs or expenses, improperly classifying expenses, or any other mistakes or improprieties that led to misreported costs. Lease accounting issues that surfaced in early 2005 were also included in this category. Our analysis also shows a significant drop in restatements announced for revenue recognition reasons, which had accounted for almost 38 percent of the restatements in our 2002 report. Cost- or expense-related issues surpassed revenue recognition issues as the most frequently identified cause of restatements primarily because of a large number of announcements made in early 2005 to correct accounting for leases by the retail/restaurant industry and tax-related issues. For example, 135 public companies announced restatements involving issues solely related to accounting for leases in 2005 after the SEC chief accountant’s February 7, 2005, letter regarding the treatment of certain leases and leasehold improvements. However, revenue recognition remained the second most frequently identified reason for restatements from July 2002 through September 2005, accounting for 20 percent of all the restatements. Actions that we classified under “revenue recognition” included a company recognizing revenue sooner or later than would have been allowed under GAAP, or recognizing questionable or invalid revenue. (See table 2 for a description of each reason.) While both internal and external parties—such as the restating company’s management or internal auditor, an external auditor, SEC, or others—can prompt restatements, about 58 percent of the 1,390 announced restatements were prompted by internal parties. This was an increase from about 49 percent in our 2002 report. However, in both our prior report and this report, external parties may have been involved in discovering some of these misstatements, even if the companies may not have made that information clear in their restatement announcements or SEC filings. The external auditor, SEC, or some other external party such as the media (as in the case of an August 2002 restatement announcement by AOL Time Warner Inc. (AOL)) was identified as prompting the restatement in 24 percent of the announcements (compared to 16 percent in our 2002 report).
In the remaining 18 percent of the announcements (compared with 35 percent in our 2002 report), we were not able to determine who prompted the restatement because the announcement or SEC filing did not clearly state who discovered the misstatement of the company’s prior financial results. SEC has revised Form 8-K, in part, to make information on financial restatements more uniform and apparent to investors, but many companies appear to have made potentially deficient filings. In addition, conflicting instructions and guidance resulted in some companies disclosing similar financial information in varying degrees and formats. In a 2003 report required by the Sarbanes-Oxley Act, SEC proposed to address the lack of uniformity by amending several of its periodic disclosure forms—essentially to make issuers’ public notification of financial information uniform. Specifically, in its report, SEC proposed to amend Form 8-K to add a specific line item for public companies to disclose what was restated and why. In March 2004, consistent with its proposal in the 2003 report, SEC amended Form 8-K to, among other things, add a new line item (Item 4.02), which requires public companies to file the Form 8-K (Item 4.02) within 4 business days if management or the company’s independent auditors determine that previously issued financial statements should not be relied upon. This alerts investors to potentially important company events that may affect their investment decisions. This change became effective August 23, 2004. The change to Form 8-K included a limited safe harbor for failure to file a timely Form 8-K in certain situations, including when the company itself determines that the financial statements may not be relied upon, but not when the independent auditor makes such a determination. In November 2004, SEC issued additional guidance to address questions concerning the revised disclosures. This “Frequently Asked Questions” guidance states that a Form 8-K is required for Item 4.01 (Change in Accountant) and Item 4.02 events, even if a periodic report such as a Form 10-K or 10-Q disclosing such information is filed during the 4 business days following the event. The amended forms and the amended rules do not make this Form 8-K filing requirement clear, and instead indicate that the filing of a Form 8-K may not be required if previously reported. Specifically, the instructions for Form 8-K state that a public company is not required to file a Form 8-K when substantially the same information has been previously disclosed in a periodic report. Between August 23, 2004, and September 30, 2005, about 17 percent of restating companies (111 companies) did not appear to file a proper Form 8-K disclosure for their restatements as required by SEC guidance. According to our analysis, about 30 percent of these 111 companies (34 companies) failed to file any Form 8-K disclosing their restatements during this same time period; it appears that these companies either failed to disclose the announced restatement at all or disclosed it in a Form 10-K or 10-Q or an amended form. The remaining 77 companies filed a Form 8-K disclosing their restatement, but under items other than the required Item 4.02—such as Item 2.02 (Results of Operations and Financial Condition) or Item 8.01 (Other Events). Furthermore, we found that the companies filing these potentially deficient filings included a mix of large and small companies.
For example, over one-third of the 111 companies we identified were large companies (as measured by market capitalization, asset size, or revenue). Moreover, a study by Glass Lewis found that about one-third of companies restating in calendar year 2005 did not file a Form 8-K (Item 4.02) to notify investors, or the public in general, about such a corporate event. We estimated that—from the trading day before through the trading day after an initial restatement announcement—stock prices of the restating companies decreased by an average of almost 2 percent, compared with an average decline of nearly 10 percent in our 2002 report. In addition, we estimated that the market capitalization of restating companies decreased by over $36 billion when adjusted for overall market movements (nearly $18 billion unadjusted), compared to adjusted and unadjusted declines of around $100 billion reported in 2002. These declines, while potentially significant for the investors involved if realized, represented about 0.2 percent of the total market capitalization of the three securities exchanges, which was about $17 trillion in 2005. The reasons for restatements also appear to have affected the severity of the impact on market capitalization, with restatements for reasons that could involve financial reporting fraud or other unspecified causes resulting in the most severe size-adjusted market reaction on average. However, revenue issues continued to have a sizeable impact and, in a change from our previous report, cost- or expense-related restatements had the greatest impact in dollar terms because they were more numerous. We also found that the market impact of restatement announcements on restating companies over longer periods was mixed, in contrast to our prior report, in which we found larger, more persistent stock price and market capitalization declines for restating companies. We estimated that, for the 1,061 cases we were able to analyze from July 1, 2002, to September 30, 2005, the stock prices of companies making an initial restatement announcement fell by almost 2 percent (market-adjusted), on average, from the trading day before through the day after the announcement (the immediate impact). Unadjusted losses in the market capitalization of restating companies totaled nearly $18 billion, ranging from a net gain of almost $9 billion from July through December 2002 to a loss of about $16 billion for 2004 (see table 3). But when the losses were adjusted for general movements in the overall market, the market capitalization of the restating companies decreased an estimated $36 billion. In our prior report, we found that the immediate impact was an average decline in stock price of nearly 10 percent and a decline, both adjusted and unadjusted, in market capitalization of around $100 billion. Thus, in total, the immediate impact for July 2002–2005 appeared to be less severe. The smaller average decline in stock price (a 2 percent decline compared with a nearly 10 percent decline) suggested that the market’s reaction for each company, on average, was not as severe. On an annual basis, and when not adjusted for market movements, the average annual decline in the current report was $5.4 billion, compared with $18.2 billion in our 2002 report. However, when market-adjusted, the average decline was $11.2 billion over the analysis period for this report, compared with an average $17.4 billion decline for the period covered in our prior report.
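For readers interested in the mechanics, the sketch below illustrates the kind of immediate-impact calculation described above: a 3-trading-day change (the day before through the day after an announcement), adjusted by subtracting the overall market's return over the same window. This is a simplified illustration with hypothetical inputs, not the report's actual methodology or data; the function and variable names are ours.

    def immediate_impact(price_before, price_after, index_before, index_after, shares):
        # Raw 3-trading-day return for the restating company
        raw = (price_after - price_before) / price_before
        # Return on a broad market index over the same window
        market = (index_after - index_before) / index_before
        # Simple market adjustment: subtract the market's return
        adjusted = raw - market
        # Adjusted change in market capitalization, in dollars
        cap_change = adjusted * price_before * shares
        return adjusted * 100, cap_change

    # Hypothetical company: stock falls 5 percent while the market rises 1 percent
    pct, cap = immediate_impact(40.00, 38.00, 1000.0, 1010.0, 50_000_000)
    print(round(pct, 1))   # -6.0 (percent, market-adjusted)
    print(round(cap))      # -120000000 (dollars)

Averaging such adjusted returns across all announcements, and summing the adjusted capitalization changes, would yield figures analogous to the averages and aggregates reported in tables 3 and 4.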
The increased severity of the market-adjusted immediate impact on market capitalization likely reflected the more negative reaction to a restatement announcement given the generally positive overall market movement during the 2003–2005 period, and could also reflect the fact that more and larger companies announced restatements in the July 2002–2005 period. The immediate impact on the market capitalization of restating companies, while potentially large for the investors involved if realized, generally was less than 0.2 percent of the total market capitalization of companies listed on NYSE, Nasdaq, and Amex for a given year during 2002–2005, ranging in magnitude from 0.01 percent to 0.14 percent (see table 4). That the immediate impact—as a percentage of total market capitalization—would appear relatively small is not surprising, considering the short trading-day interval that we analyzed. We chose the 3-trading-day window to focus as much as possible on the restatement announcement, to the exclusion of other factors. Later in this report, we examine losses over longer periods, as well as the effects of restatements on overall market confidence. While our analysis generally showed declines in market capitalization, the results for the second half of 2002 were positive and can be explained in large part by the influence of two large companies—Tyco International Ltd. (Tyco) and AOL. The market reactions to the restatement announcements of the two companies resulted in adjusted market capitalization gains of $4.5 billion. In the cases of Tyco and AOL, both of which involved revenue recognition issues, the restatement announcements came weeks or months after initial news of potential accounting fraud and errors surfaced, and so the market had likely already anticipated these announcements and factored the information into the companies’ stock prices well before the restatement announcement. Over the 3 trading days surrounding the announcement dates that we identified, Tyco’s market capitalization increased by around $2.8 billion and AOL’s market capitalization increased by around $1.6 billion. We also conducted a separate analysis of the immediate impact of restatement announcements for the 329 announcements that we were unable to analyze in the primary event study. This group included 159 announcements attributed to companies with stock not listed on the exchanges. We limited this additional analysis to a simple assessment of the unadjusted change in market capitalization over the three trading days surrounding the restatement announcement, generally relying on data we obtained from SEC’s and Nasdaq’s Web sites. We were able to gather sufficient data to analyze 242 of the 329 announcements (114 announcements made by listed companies and 128 announcements made by unlisted companies). We estimated that these restatement announcements resulted in an average decline in market capitalization of 1.5 percent from the trading day before the announcement through the trading day after the announcement, reflecting an unadjusted decline of about $3.7 billion in addition to the nearly $18 billion decline estimated in the primary event study. Announcements made for reasons that could involve financial reporting fraud or other unspecified causes, which we classified in the Other category, as well as restructuring and revenue recognition-related issues, had the largest negative impact on market capitalization when adjusted for the size of a restating company (see fig. 6);
however, when measured in dollars, cost- or expense-related restatement announcements accounted for more of the immediate decline in market capitalization than each of the other reasons over our analysis period. These results differ from the findings in our earlier report, suggesting that the nature of the market response to restatements may have changed in some respects. (We discuss how different types of restatements may have affected investor confidence in another section of this report.) To assess the immediate market impact of a given type of restatement on a restating company’s market capitalization, we computed the ratio of the estimated change in the company’s market capitalization to the company’s total market capitalization over the 3 trading days surrounding the announcement of a restatement. We then averaged these impacts for each reason. While restatement announcements involving related-party transactions, which can revolve around revenue issues, appeared to have the largest negative impact, this result was not statistically different from zero. This category accounted for a relatively small number of restatements, and the results were heavily influenced by three announcements that had sizeable market reactions. In contrast to our previous report, in which positive responses to two large restatements attributed to restructuring, asset impairment, and inventory issues led to market gains in that category, restatements made for these reasons in 2002–2005 represented about 29 percent of the market-adjusted market capitalization losses. These reasons accounted for about 11 percent of the cases we analyzed, and the median size of a company restating for these reasons was $504 million. The effect of restatements announced for revenue recognition issues on market capitalization initially appeared weaker than in our previous report. Restatements involving revenue recognition accounted for almost 20 percent of the cases, but only around 10 percent of the market-adjusted market capitalization losses. The median size of a company restating for this reason was $321 million; thus it appears that companies announcing restatements for revenue recognition reasons tended to be smaller. However, when adjusted by the size of the restating company, restatement announcements involving revenue recognition issues (more than many other reasons) resulted in an average loss that represented a larger percentage of a restating company’s market capitalization. Cost- or expense-related restatements had a greater effect on market capitalization than in our previous report, and were distinguished from restatements for other reasons in three ways. First, by dollars, cost- or expense-related restatement announcements accounted for more of the immediate declines in market capitalization than other reasons over our analysis period. More specifically, cost- or expense-related restatement announcements accounted for $15.2 billion, or about 42 percent, of the $36.5 billion in total losses (market-adjusted) over our analysis period. This decline was driven in large part by the January 9, 2004, restatement announcements by Shell Transport and Trading Company, plc, and Royal Dutch Petroleum Company, which represented a decline in estimated market capitalization attributed to the cost- or expense-related category of over $4 billion. Second, when measured by median market capitalization, companies announcing restatements involving cost or expense issues were the largest.
The median size of a company restating for cost or expense reasons was $632 million. Furthermore, of the 1,061 cases analyzed, cost or expense was the most frequently cited reason for restating (38 percent). Finally, the market did not perceive all restatements negatively. We found that announcements involving the acquisition and merger category—with a median company size of $318 million—resulted in an overall increase of over $1.5 billion in market capitalization. These positive results stand in significant contrast to our previous report, in which we attributed more than $19 billion in market capitalization decline to this category. Our analysis of restatement announcements showed mixed results over intermediate and longer periods, but these announcements overall tended to have some longer-term impacts. On a market-adjusted basis, from 20 trading days before through 20 trading days after a restatement announcement (the intermediate impact), we estimated that the stock prices of restating companies declined by nearly 2 percent on average, and their market capitalization declined by over $78 billion in aggregate; whereas, on an unadjusted basis, the market capitalization of restating companies decreased around $5 billion (see table 5). This suggests that the reaction was more negative than expected given the movement in the overall market. On a market-adjusted basis, from 60 trading days before through 60 trading days after the announcement (the longer-term impact), we estimated that the stock prices of restating companies decreased by less than 2 percent on average and their market capitalization decreased by over $126 billion in aggregate (see table 6). Unadjusted, the longer-term impact was an increase of about $34 billion in the market capitalization of restating companies. In our 2002 report, we estimated that the unadjusted market capitalization of restating companies that we analyzed decreased by close to $240 billion from 60 trading days before through 60 trading days after the announcement. This large difference may be the result of the generally positive overall market movement during 2003–2005, an increased number of restatements that the market did not view negatively, or the possibility that the financial markets have grown increasingly less sensitive to restatement announcements since 2002. Considering longer event time frames increased the possibility that other factors and events may have affected a restating company’s stock price. Nevertheless, expanding the event window beyond the immediate trading days around the restatement announcement date allowed us to assess the longer-term impact of restatement announcements. The longer time frame also allowed us to capture any impact from earlier company announcements that may have signaled restatements (for example, a company’s CFO departing suddenly, its outside audit firm resigning, or the notice of an internal or SEC investigation at the company). With such events, investors may sense that more negative news is forthcoming and drive the company’s stock price lower. For example, speculation about potential accounting problems at AOL first appeared publicly in mid-July 2002 in The Washington Post; however, it was not until mid-August that the company announced that it would restate. Our immediate impact analysis around the August 14, 2002, announcement date revealed a sizeable positive impact.
Appendix I provides additional details about these measures, along with information about their limitations. Although researchers generally agree that restatements can have a negative effect on investor confidence, the surveys and indexes of investor confidence that we reviewed did not indicate definitively whether investor confidence increased or decreased since 2002. Researchers noted several reasons for the inconclusive results about the effects of restatements on investor confidence. For example, some researchers have noted that, since 2002, investors may have had more difficulty discerning whether a restatement represented a response to aggressive or abusive accounting practices, constituted remediation of past accounting deficiencies, or merely represented technical adjustments. Furthermore, investor confidence remains difficult to quantify because it cannot be measured directly and because investors consider a variety of factors when making investment decisions. However, we identified several survey-based indexes that use a variety of methods to measure investor confidence; we also identified empirical work by academics and financial industry experts. A periodic UBS/Gallup survey-based index aimed at gauging investor confidence found that, although investor confidence remains low, accounting issues appear to be of less concern. In contrast, according to the Yale index, which asks a different set of questions, institutional investors have had slightly more confidence in the stock market since 2002; the index produced uncertain results for individual investors. Although researchers generally have agreed that restatement announcements could send unfavorable messages about restating companies to the capital markets, analysts with whom we spoke expressed less agreement about the causes and effects of restatement announcements on investors (and investor confidence) since 2002. While we found some evidence in our 2002 report that suggested that restatement announcements prior to July 2002 may have led to widespread concerns about the perceived unreliability of financial reports, the impact of restatements since July 2002 on investor confidence has been more uncertain because the driving forces behind the increase in restatements have been less clear. For example, some analysts have suggested that investors may not have been able to discern whether restatements since 2002 represented a response to aggressive or abusive accounting practices, the complexity of accounting standards, the remediation of past accounting deficiencies, or just technical adjustments. Some analysts indicated that the increase in restatements is a serious problem with negative consequences for investor confidence.
Other analysts have said that restatements might have minimal (or positive) effects on confidence if investors saw them as a remediation of accounting problems existing prior to the passage of the Sarbanes-Oxley Act, recognizing some restatements as the expected byproduct of a greater focus on the quality of financial reporting by management, audit committees, external auditors, and regulators since 2002. Although accounting issues discovered at one company could cause capital market participants to reassess the credibility of financial statements issued by other companies, researchers also noted the absence, so far, of large numbers of restatements that represent deliberate violations of GAAP—the same kind of restatements many believed produced widespread effects on investor confidence in 2001 and 2002 (e.g., Enron, WorldCom, and Adelphia). In that vein, others noted that the Sarbanes-Oxley Act, the collapse of Arthur Andersen, and perceived litigation risks have encouraged more conservative approaches that resulted in restatements to correct small errors or technical adjustments that likely were irrelevant to investors. Some believed that at least a portion of the restatements since 2002 have resulted from excessive complexity in accounting principles or the second-guessing of legitimate judgment calls that did not appear relevant to the valuations of the companies involved. One expert expressed concern that restatements may have lost their salience to market participants because they now occur so frequently, while others noted that investor confidence would be negatively affected if the number of restatements did not decline in the near future. Directly measuring the effect of restatements on investor confidence remains difficult because so many factors go into any investment decision and the reasons for restatements, which can affect investor response, often are unclear. However, we have highlighted results from two respected survey-based indexes of investor confidence, obtained from UBS Americas, Inc. and the International Center for Finance at the Yale School of Management. The UBS index has been acknowledged for its accuracy and timeliness, and the Yale School of Management indexes are considered to be the longest-running effort to measure investor confidence. The UBS/Gallup Index of Investor Optimism suggests that investor confidence remains well below the March 2002 level, when investor optimism had started to rebound following the Enron scandal. As shown in figure 7, according to the survey, concerns about accounting practices and corporate governance started to affect investor optimism and were heightened following the WorldCom restatement announcement in June 2002. The index continued to decline until March 2003, when it reached an all-time low of 5—mirroring a similar decline in stock markets. Beginning in April 2003, however, the survey indicated that investors were becoming more confident in the U.S. economic recovery, a trend that continued throughout most of the year. By January 2004, the index was back up to 108 before experiencing another steep decline by September 2005 as markets reacted, in part, to a sharp increase in energy prices. While the index increased over the remainder of 2005, it remained below the March 2002 level through the second quarter of 2006. These trends are consistent with various proxies for investor confidence. For example, since April 2003, net new cash flows to equity mutual funds have been positive.
And, according to "Barron's Confidence Index," investor confidence returned to its historical average by mid-2004 and—despite a decline in investor confidence in 2005—has remained above its lows in 2002 and 2003. While the 2002 and 2003 surveys reported that the leading concern expressed by investors was the negative impact of questionable accounting practices on the market, in 2005 and 2006, investors identified a number of other reasons as more significant for the decline in investor optimism. The major reasons cited for the decline were (1) the price of energy, including gas and oil; (2) the outsourcing of jobs to foreign countries; (3) the federal budget deficit; (4) the situation in Iraq; and (5) the economic impact of Hurricane Katrina and other storms. While some of these reasons reflect current events, others consistently were viewed as less important than accounting issues in the 2002, 2003, and 2004 surveys. However, it should be noted that accounting issues continue to be viewed as more important than a variety of other forces affecting the investment climate, such as expectations regarding inflation, the value of the dollar, and the threat of more terrorist attacks. While a significant portion of all investors surveyed continued to believe that accounting issues were negatively affecting the market, according to the UBS/Gallup survey the percentage of investors feeling this way has decreased (see fig. 8). While 91 percent of all investors surveyed in 2002 felt that accounting issues were negatively impacting the market, about 71 percent felt that way in May 2006. Moreover, the percentage of investors indicating that accounting issues were hurting the investment climate in the United States "a lot" fell from 80 percent in July 2002 to 39 percent in May 2006. The results of the UBS surveys were consistent with the findings of the Securities Industry Association's (SIA) annual investor surveys. SIA found that, although accounting at U.S. corporations was still a major concern among investors in 2004, concern had declined significantly from 2002. Moreover, in 2004, investors seemed more concerned with the political environment and the state of the U.S. economy than accounting fraud and corporate governance issues. What has happened since 2004 is unclear because no survey was conducted for 2005. However, a newer index, the State Street Investor Confidence Index, which attempts to measure investors' risk appetite by measuring the percentage of risky assets investors hold in their portfolios, found that investor confidence remained relatively unchanged throughout 2005 and into 2006. In our 2002 report, we noted that the Yale indexes suggested significantly different impacts of restatements on investor confidence than the UBS/Gallup Index of Investor Optimism; however, since 2003, the differences in the indexes have become less significant. The International Center for Finance at the Yale School of Management calculates four indexes that are based on survey questions directed to both wealthy individual and institutional investors. Although the indexes do not all move in the same direction over time, or even approximately so, the indexes generally show a small improvement in institutional investor confidence over the value for June 2002, but a slight decline in individual investor confidence with one exception. Some of the Yale indexes show pronounced volatility in short-term confidence.
In fact, there were periods during 2003, 2004, and 2005 when some measures of confidence declined significantly before rebounding in 2006. Although these confidence indexes did not directly measure the impact of restatements on investor confidence, they illustrate the difficulty in attempting to gauge general confidence in the market and how different classes of investors can interpret and respond to events in different ways. As in 2002, we focused on the three indexes that most directly measured investor confidence. The first Yale index is the One-Year Confidence Index, which indicated that institutional investor confidence fluctuated between June 2002 and May 2006, but ended higher than 2002 levels. During the same period, individual investor confidence also fluctuated, but continued to trend downward. This implies a divergence in opinion between individual and institutional investors, but it is unclear what this difference means for overall confidence in the stock market and how restatements affect confidence. These findings do appear to suggest that developments during 2004 and 2005 had some longer-term negative effect on individual investors' confidence, but that any negative effect on institutional investors' confidence was temporary. The second Yale index is the Buy on Dip Confidence Index, which suggests that confidence has been virtually unaffected despite fluctuations in both directions from 2004 to 2005. Since the period immediately after September 11, 2001, and the beginning of the Enron scandal a few months later, individual and institutional confidence that the stock market would rise the day after a sharp fall has diverged, with institutional confidence dropping and individual confidence rising somewhat. However, between December 2003 and May 2006, institutional investor confidence increased from 57 to 68 percent, somewhat above its June 2002 value (62 percent), while individual investor confidence fluctuated up and down but eventually settled just 1 percentage point below its June 2002 value. The price-to-earnings ratio functioned as an indicator supporting the finding of unchanged confidence; in January 2006, the ratio was equivalent to its June 2002 value and remained above its historical average. The third Yale index is the Crash Confidence Index, which suggested that confidence generally has been low—less than 50 percent—for both individual and institutional investors, providing the only instance in which individual and institutional confidence moved similarly. Although it has remained low since October 2002, when the market reached its lowest point in 6 years, confidence has shown a distinct increase for both individual and institutional investors. Specifically, by May 2006 this index showed an improvement from June 2002 values, with a 43 percent and a 17 percent increase for institutional and individual investors, respectively. However, confidence in the probability that a catastrophic stock market crash would not occur in the United States may provide very little insight into whether market participants are confident in the reliability of financial information transmitted to investors, because not even the accounting scandals of 2001 and 2002 triggered a major collapse in market valuations. Instead, the increase in confidence observed may merely be a vote of general confidence in the resiliency of U.S. capital markets. The number of SEC enforcement cases involving financial fraud and issuer reporting issues increased more than 130 percent from fiscal year 1998 to 2005.
Moreover, in fiscal year 2005, cases involving financial fraud and issuer reporting issues constituted the largest category of enforcement actions. The resources SEC devoted to enforcement grew as well. Of the enforcement actions SEC resolved between March 1, 2002, and September 30, 2005, most of the actions were taken against companies or their directors, officers, employees, and other related parties. Finally, the newly created PCAOB also has broad investigative and disciplinary authority over public accounting firms that have registered with it and persons associated with such firms; PCAOB has brought several enforcement actions since its inception. SEC's Division of Enforcement investigates possible violations of securities laws, including those related to financial fraud and issuer reporting issues. Between fiscal years 2001 and 2005, these types of cases increased as a percentage of SEC's total enforcement cases from 23 percent to almost 30 percent (see fig. 9). From fiscal years 2002 to 2005, SEC initiated an average of about 588 enforcement actions per year, compared to an average of 497 for fiscal years 1998 to 2001. Of these actions, an average of about 135 per year involved financial fraud or issuer reporting issues, compared to an average of 97 per year for the prior period. In fiscal year 2005, cases involving financial fraud and issuer reporting issues were the largest category of enforcement actions, accounting for almost one-third of the cases, followed by broker-dealer and investment company cases. For examples of some of the cases involving accounting- and/or auditing-related issues, see our detailed case studies on American International Group, Inc. (app. IX), Federal National Mortgage Association (app. XI), and Qwest Communications International, Inc. (app. XII). In our 2002 report, we found that SEC's enforcement function was strained because of resource challenges and an increased workload; however, as a result of several high-profile corporate failures and financial reporting frauds, among other things, the Sarbanes-Oxley Act authorized a 65 percent increase in SEC's 2003 appropriations and directed the additional funding to be used in certain areas. Specifically, no fewer than 200 positions were to be used to strengthen existing program areas, including enforcement. In fiscal year 2003, enforcement resources increased over 20 percent, including 194 staff in Washington, D.C., and SEC's regional and district offices. Moreover, between fiscal years 2003 and 2004, enforcement staffing increased about 29 percent. SEC has taken a variety of accounting- and audit-related enforcement actions against various entities and individuals, ranging from public companies and audit firms to CEOs and CPAs. Accounting-related violations identified included fraud, lying to auditors, filing misleading information with SEC, and failing to maintain proper books and records. Investigations can lead to SEC-prompted administrative or federal civil court actions. Depending on the type of proceeding, SEC can seek sanctions that include injunctions, civil money penalties, disgorgement, cease-and-desist orders, suspensions of registration, bars from appearing before the Commission, and bars from participating as an officer or director of a public company. As previously reported, most enforcement actions are settled, with respondents generally consenting to the entry of civil, judicial, or administrative orders without admitting or denying the allegations against them.
We found this to be true of the auditing- and accounting-related cases we reviewed as well. For a more detailed discussion of SEC's enforcement process, see appendix VII. About 90 percent of the more than 750 actions resolved between March 2002 and September 2005 were brought against companies or their directors, officers, employees, or other parties. Another 10 percent involved audit firms and individuals associated with firms, including audit managers, partners, and engagement auditors. In the cases involving public companies and their officials and related persons, we found that SEC has taken a variety of actions against a wide range of officials and employees. Historically, SEC was reluctant to seek civil monetary penalties against companies in financial fraud cases because such costs would be passed along to shareholders who had already suffered as a result of the violations. In the AAERs reviewed from March 2002 to September 2005, we found that SEC started to take increasingly aggressive actions against public companies, including levying millions of dollars in civil money penalties in 2003 and 2004. However, SEC's position on civil money penalties against public companies continued to evolve. In January 2006, SEC outlined its position on this issue when it announced the filing of two settled actions against McAfee, Inc. and Applix, Inc. In one case, the company paid a civil money penalty; in the other, it did not. According to the release, SEC thought it was important to "provide the maximum possible degree of clarity, consistency, and predictability in explaining the way that its corporate penalty authority will be exercised." The release discussed how the Sarbanes-Oxley Act changed the ultimate disposition of penalties, because SEC can now take penalties paid by individuals and entities in enforcement actions and add them to disgorgements for the benefit of victims through the Fair Funds provision. Under this provision, civil money penalties that SEC collects no longer go to the Department of the Treasury; instead, they can be used to help compensate victims, including harmed shareholders, for the losses they experienced. The Commission announced that it planned to more closely review actions involving civil money penalties against public companies and laid out the principles it planned to follow in making such determinations. The overarching principle appears to be that corporate penalties are an essential part of an aggressive and comprehensive enforcement program. In addition, SEC's view of the appropriateness of a penalty against the corporation versus the individuals who actually committed the violations is to be based on two considerations. First, SEC considers whether the corporation received a direct benefit as a result of the violations (e.g., the violation resulted in reduced expenses or higher revenues). Second, SEC considers the degree to which the penalty will recompense or further harm injured shareholders.
Other factors SEC will consider are the need to deter the particular type of offense; the extent of the injury to innocent parties; whether complicity in the violation was widespread throughout the entity; the level of intent on the part of the perpetrators; the degree of difficulty in detecting the particular type of offense; the presence or lack of remedial steps by the corporation; and the extent of cooperation with the Commission and other law enforcement agencies. In our 2002 report, we also noted that Congress, market participants, and others had questioned the lack of severity of many of the sanctions given the level of investor harm. At least one SEC official, at the time, felt that because monetary penalties are often paid by officer and director insurance policies, or are considered insignificant in relation to the violation, SEC should pursue more officer and director bars. However, the test for imposing officer and director bars was viewed as too restrictive. Since that time, the Sarbanes-Oxley Act changed the threshold for seeking officer and director bars by amending the securities acts' requirement from "substantial unfitness" to "unfitness," thereby making it easier for SEC to pursue officer and director bars. From March 2002 through September 2005, SEC obtained officer and director bars against hundreds of officials. Specifically, SEC resolved charges of securities fraud or issuer reporting violations against hundreds of CFOs or chief accounting officers and CEOs between March 2002 and December 2005. See appendixes IX, XI, and XII for a summary of the actions taken by SEC in three of the six cases we analyzed. SEC may also bring an enforcement action against other individuals, such as officers and principals who are not part of top management (other participants and responsible parties). In the AAERs we reviewed, SEC charged such individuals with accounting-related violations that resulted in injunctions, civil monetary penalties, disgorgements, cease-and-desist orders, and officer and director bars. For example, SEC, and in some cases the Department of Justice, has filed suit against several senior officers at public companies—including chairmen, chief operating officers, controllers, directors, vice presidents, and clients. These executives have been charged with securities law violations such as fraud, reporting violations, record-keeping violations, and insider trading. Although the Sarbanes-Oxley Act provided PCAOB enforcement authority over registered public accounting firms and their associated persons (which we discuss below), SEC continues to have the authority to bring actions against accounting firms. In addition to investigating violations of the securities laws, the Division of Enforcement investigates improper professional conduct by accountants and other professionals who appear before SEC, and the agency may pursue administrative disciplinary proceedings against these professionals under Rule 102(e) of SEC's Rules of Practice. If SEC finds that securities laws have been violated or improper professional conduct has occurred, it can prohibit professionals from appearing before SEC temporarily or permanently. A licensed accountant engages in improper professional conduct if he or she intentionally or knowingly violates an applicable professional standard or engages in either of the two types of negligent conduct defined under the rule. From March 2002 to September 2005, SEC took action against numerous firms and dozens of individuals.
The actions included injunctions, civil monetary penalties, bars or suspensions from appearing before the Commission, cease-and-desist orders, officer and director bars, and censures. As mentioned previously, the Sarbanes-Oxley Act authorized PCAOB to conduct investigations concerning any acts or practices, or omissions to act, by registered public accounting firms and persons associated with such firms, or both, that may violate any provision of the act, PCAOB's rules, the provisions of the securities laws relating to the preparation and issuance of audit reports and the obligations and liabilities of accountants with respect thereto (including SEC rules issued under the act), or professional standards. In May 2004, SEC approved PCAOB's rules implementing this authority. When PCAOB alleges a violation, it has the authority, after an opportunity for a hearing, to impose appropriate sanctions. The sanctions can range from revoking a firm's registration or barring a person from participating in audits of public companies to imposing monetary penalties or requirements for remedial measures, such as training, new quality control procedures, or the appointment of an independent monitor. Between May 2005 and July 2006, PCAOB instituted and settled five disciplinary proceedings against registered public accounting firms and associated persons. These proceedings dealt with cases involving concealing information from PCAOB, and submitting false information to it, in connection with a PCAOB inspection; noncompliance with PCAOB rules, independence standards, and auditing standards in auditing financial statements; and failing to take prompt and appropriate steps in response to indications that an issuer audit client may have committed an illegal act. The associated sanctions ranged from revoking a firm's registration and barring an individual from being an associated person of a registered public accounting firm to censuring firms and associated persons. A variety of factors appear to have contributed to the increased trend in restatements, including increased accountability requirements on the part of company executives; increased focus on ensuring internal controls for financial reporting; increased auditor and regulatory scrutiny (including clarifying guidance); and a general unwillingness on the part of public companies to risk failing to restate, regardless of the significance of the event. Given the new regulatory and oversight structure and the current operating environment, it is unclear if and when the current trend toward increasing restatements will subside. The number of restatements may continue to increase in the immediate future, as new areas of scrutiny by SEC and others (for example, small public companies' implementation of the Sarbanes-Oxley Act internal control requirements and hedge accounting rules) may trigger future restatements, similar to the trends experienced after the focus on accounting for leases and income taxes in early 2005. Currently, approximately 60 percent of public companies—generally smaller public companies—have yet to fully implement the internal control requirements of the Sarbanes-Oxley Act, which could also affect the number of restatements. In recent years, the larger public companies' implementation of Section 404 requirements resulted in many companies announcing financial restatements.
Alternatively, the number of restatement announcements could subside after the regulatory and firm changes called for in the Sarbanes-Oxley Act have been fully implemented and allowed to play through. Companies that announce restatements generally continue to experience decreases in market capitalization in the days around the initial announcement; however, the magnitude of the impact has significantly decreased from the period analyzed in our 2002 report. The exact reason for this decline is unclear, but it may involve a variety of factors, such as investors' inability to discern the reason for a restatement, varying reactions by investors about what a restatement means (e.g., whether the company is improving its disclosures), or investors' growing insensitivity to financial statement restatement announcements. These views are supported in part by investor confidence data and research indicating that, while investor confidence seems to have increased, investors often are unable to decipher the reason for a restatement, and that restatements may be viewed in various ways by investors, depending on whether they believe the trend is part of a "cleansing process" (i.e., public companies strengthening their internal controls) or merely reflects technical adjustments for compliance. SEC improved disclosure of restatement announcements in 2004 by requiring additional information on Form 8-K. However, some public companies continue to announce restatements that result in non-reliance on prior financial statements outside of the required Form 8-K (Item 4.02) filing process. That is, about 17 percent of companies announcing restatements that resulted in non-reliance between August 2004 and September 2005 failed to disclose this information under the appropriate item or failed to file an 8-K at all. While most filed the information under an item other than 4.02 in the Form 8-K, some appeared to have disclosed the information in a Form 10-K or 10-Q, which raises questions about inconsistencies between the Form 8-K instructions and the staff questions-and-answers discussion concerning filing requirements under Item 4.02. The result of the potential noncompliance is that some companies continue to restate without consistently informing investors and the general public that such restatements have occurred and that previously issued financial statements should not be relied upon—which raises concerns about compliance with SEC's revised Form 8-K disclosure requirements and the ongoing transparency and consistency of public disclosures. To better enable SEC to enforce its regulations and improve the consistency and transparency of information provided to investors about financial restatements, we recommend that SEC take specific actions to improve oversight of, and compliance with, disclosure requirements for certain restatements. First, SEC should direct the head of the Division of Corporation Finance to investigate the instances of potential noncompliance we and Glass Lewis identified, and take appropriate corrective action against any companies determined to have made a deficient filing. Second, SEC should harmonize existing instructions and guidance concerning Item 4.02 by amending the instructions to Form 8-K and other relevant periodic filings to clearly state that an Item 4.02 disclosure on Form 8-K is required for all determinations of non-reliance on previously issued financial statements, irrespective of whether such information has been disclosed on a periodic report or elsewhere.
We provided a draft of this report to the Chairmen of SEC and PCAOB for their review and comment. We received written comments from SEC and PCAOB that are summarized below and reprinted in appendixes II and III. Both SEC and PCAOB provided technical comments that were incorporated into the report as appropriate. In response to our first recommendation, that the Division of Corporation Finance investigate the instances of potential noncompliance identified and take appropriate corrective action, the Director of the Division of Corporation Finance stated that SEC appreciated the recommendation and that it will continue its long history of examining instances of potential noncompliance with federal securities laws. In response to our second recommendation, SEC stated that it will carefully consider harmonizing existing instructions and guidance related to a company's need to notify the public that previously issued financial statements or results should not be relied upon. In commenting on the draft report, the Chairman of PCAOB stated that, as the organization charged by the Sarbanes-Oxley Act with overseeing the audits of public companies, PCAOB would find the report's findings on the causes of, and trends in, restatements by public companies useful to its oversight efforts. As agreed with your office, we plan no further distribution of this report until 30 days from its issuance unless you publicly release its contents sooner. At that time, we will send copies of this report to the Chairman of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the Senate Subcommittee on Securities and Investment, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Financial Services; and other interested congressional committees. We will also send copies to the Chairman of SEC and the Chairman of PCAOB and will make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Orice M. Williams at (202) 512-5837 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix XV for a list of other staff who contributed to the report.
In 2002, GAO reported that the number of restatement announcements due to financial reporting fraud and/or accounting errors grew significantly between January 1997 and June 2002, negatively impacting the restating companies' market capitalization by billions of dollars. GAO was asked to update key aspects of its 2002 report (GAO-03-138). This report discusses (1) the number of, reasons for, and other trends in restatements; (2) the impact of restatement announcements on the restating companies' stock prices and what is known about investors' confidence in U.S. capital markets; and (3) regulatory enforcement actions involving accounting- and audit-related issues. To address these issues, GAO collected restatement announcements meeting GAO's criteria, calculated and analyzed the impact on company stock prices, obtained input from researchers, and analyzed selected regulatory enforcement actions. While the number of restatement announcements identified grew about 67 percent from 2002 through September 2005, the proportion of public companies announcing financial restatements rose from 3.7 percent to 6.8 percent over this period. Industry observers noted that increased restatements were an expected byproduct of the greater focus on the quality of financial reporting by company management, audit committees, external auditors, and regulators. GAO also observed the following trends: (1) cost- or expense-related reasons, including lease accounting issues, accounted for 38 percent of the restatements, followed in frequency by revenue recognition issues; and (2) most restatements (58 percent) were prompted by an internal party such as management or internal auditors. In the wake of increased restatements, SEC standardized disclosure requirements by requiring companies to file a specific item on the Form 8-K when a company's previously reported financial statements should no longer be relied upon. However, between August 2004 and September 2005, about 21 percent of the companies GAO identified as restating did not appear to file the proper disclosure when they announced their intention to restate. These companies continued to announce intentions to restate previously issued financial statements in a variety of other formats. Although representing about 0.4 percent of the market capitalization of the major exchanges, which was $17 trillion in 2005, the market capitalization of companies announcing restatements between July 2002 and September 2005 decreased $63 billion when adjusted for market movements ($43 billion unadjusted) in the days around the initial restatement announcement. Researchers generally agree that restatements can negatively affect overall investor confidence, but it is unclear what effects restatements had on confidence in 2002–2005. Some researchers noted that investors might have grown less sensitive to the announcements. Others postulated that investors had more difficulty discerning whether restatements represented a response to aggressive or abusive accounting practices, complex accounting standards, remediation of past accounting deficiencies, or technical adjustments. The surveys, indexes, and other proxies for investor confidence that GAO reviewed did not indicate definitively whether investor confidence increased or decreased since 2002. As was the case in the 2002 report, a significant portion of SEC's enforcement activities involved accounting- and auditing-related issues.
Enforcement cases involving financial fraud and issuer reporting issues increased from about 23 percent of total enforcement actions taken in fiscal year 2001 to almost 30 percent in fiscal year 2005. Of the actions resolved between March 1, 2002, and September 30, 2005, about 90 percent were brought against public companies or their directors, officers, employees, or related parties; the other 10 percent involved accounting firms and individuals involved in the external audits of these companies.
Nationwide implementation of E911 by local wireline telephone companies began in the 1970s; today, 99 percent of the population is covered by wireline 911 service. With wireline E911 service, emergency calls are automatically routed to the appropriate public safety answering point (PSAP), and the call taker receives the telephone number and street address of the caller. In 1996, FCC responded to the rising number of mobile telephone subscribers and the resulting increase in wireless 911 calls by adopting rules for wireless E911 that established a two-phase implementation approach and set deadlines for wireless carriers regarding their part in E911 deployment. FCC required that (1) by April 1998, or within 6 months of a request from a PSAP, wireless carriers be prepared to provide the PSAP with the wireless phone number of the caller and the location of the cell site receiving the 911 call (Phase I information); and (2) by October 2001, or within 6 months of receiving a request from a PSAP, wireless carriers be prepared to provide the PSAP with the geographic coordinates of the caller's location with greater precision, generally within 50 to 300 meters (Phase II information). As shown in figure 1, the wireless carriers, local exchange carriers (LECs), and PSAPs must have appropriate equipment and interconnections for wireless E911 calls to be sent to and received by PSAPs with the caller's location information. For example, wireless carriers must finance the implementation of a caller location solution and test their equipment to verify its accuracy. Local exchange carriers are generally responsible for ensuring that all the necessary connections between wireless carriers, PSAPs, and databases have been installed and are operating correctly. The original E911 system was designed to carry only the caller's telephone number with the call, and the associated fixed address was obtained from an established database. Wireless E911, however, requires more data items, and the mobile caller's location must be obtained during the call and delivered to the PSAP separately using additional data delivery capabilities. In order to translate the latitude and longitude location information into a street address, PSAPs usually must acquire and install mapping software. PSAPs may also need to acquire new computers to receive and display this information. Getting PSAPs the technology needed to receive wireless E911 location information is primarily a state and local responsibility because PSAPs serve an emergency response function that has traditionally fallen under state or local jurisdiction. FCC has no authority to set deadlines for the PSAPs' readiness.
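To illustrate the difference in the data items delivered under each phase, the following simplified sketch models the caller information a PSAP might receive with a wireless 911 call. The field names, types, and example values are our own illustrative assumptions and do not represent an actual industry data specification.

```python
# Simplified, hypothetical model of wireless E911 call data by phase.
# Field names and values are illustrative only, not an industry format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WirelessE911Call:
    callback_number: str               # caller's wireless phone number (Phases I and II)
    cell_site_id: str                  # cell site or tower receiving the call (Phase I)
    latitude: Optional[float] = None   # caller coordinates (Phase II only),
    longitude: Optional[float] = None  # generally accurate to 50-300 meters

phase1_call = WirelessE911Call(callback_number="555-0134", cell_site_id="TOWER-0042")
phase2_call = WirelessE911Call(callback_number="555-0134", cell_site_id="TOWER-0042",
                               latitude=38.8951, longitude=-77.0364)

# With Phase II coordinates, a PSAP's mapping software could translate the
# latitude and longitude into a street address; with Phase I data alone,
# only the general area served by the cell site is known.
for call in (phase1_call, phase2_call):
    phase = "II" if call.latitude is not None else "I"
    print(f"Phase {phase}: {call}")
```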
The ENHANCE 911 Act of 2004 was enacted to coordinate 911 and E911 services at the federal, state, and local levels and to ensure that the taxes, fees, or charges imposed for enhancing 911 services are used only for the purposes for which the funds are collected. The act called for the creation of an E911 Implementation Coordination Office to improve E911 coordination and communication. This office will be operated jointly by the National Highway Traffic Safety Administration (NHTSA) and the National Telecommunications and Information Administration (NTIA), and will be housed at NHTSA. Although the office had not received an appropriation as of January 2006, a DOT official told us that NHTSA and NTIA are working together to delineate their respective responsibilities. The act also authorized matching federal grants for eligible state, local, and tribal entities for the deployment and operation of Phase II E911 services. The act requires applicants for a matching federal grant to certify that no portion of any designated state and local E911 funds are being obligated or expended for any purpose other than the purposes for which the funds are designated. The act authorized $250 million per year for matching grants for fiscal years 2005 through 2009. However, no funds were appropriated for these grants in 2005. Significant progress has been made in implementing wireless E911 services since our last report on this topic in November 2003. At that time, using data from NENA, we reported that nearly 65 percent of PSAPs nationwide had implemented Phase I wireless E911 services and 18 percent of PSAPs had implemented Phase II wireless E911 with at least one wireless carrier. Since that time, there has been a marked increase in both of these percentages. As of January 2006, NENA reports that nearly 80 percent of PSAPs nationwide had implemented Phase I wireless E911 services and 57 percent had implemented Phase II with at least one wireless carrier. At the county level, NENA reports that approximately 70 percent of counties nationwide have implemented Phase I wireless E911 services and 44 percent of the counties have implemented Phase II with at least one wireless carrier. According to NENA, many of the PSAPs that have implemented Phase I and Phase II are in areas that cover higher concentrations of people, and as a result, approximately 85 percent of the U.S. population is now covered by Phase I and nearly 69 percent by Phase II with at least one wireless carrier. See figure 2 for nationwide deployment of Phase I and II based on population coverage by state. While progress is being made in wireless E911 implementation, the estimates from state contacts indicate that no clear picture is emerging on when Phase II will be fully deployed nationwide. As noted earlier, FCC has no authority to set deadlines for PSAPs to implement wireless E911 services. As a result, there is no federal requirement for full wireless E911 implementation, and states may or may not have set their own deadlines for implementation. In our survey of state E911 contacts, we asked respondents to provide us with an estimate of when they believed their state would have wireless Phase II service fully in place with at least one wireless carrier per PSAP. We found that state E911 contacts offered a wide range of estimated Phase II completion dates. As shown in figure 3, 10 of the 44 state contacts who responded to our survey indicated that Phase II was already in place throughout their state. Eight state contacts noted that they would have Phase II in place for all of their PSAPs with at least one wireless carrier within a year. Thirteen state contacts provided a range of 1 to 5 years for Phase II to be implemented, with three state contacts responding that it would take more than 5 years.
Furthermore, five state contacts noted that their state might never be 100 percent complete for Phase II service. For example, one state contact noted that four rural counties opted not to apply for state funding to implement wireless E911 and that two of these counties have decided to implement only wireline E911. Contacts in five states had no basis to judge when Phase II would be in place in their states. Based on our survey results and NENA data, we found that most states obtain E911 funds through state-mandated surcharges collected by wireless carriers from the carriers' wireless subscribers. States have the discretion to determine how these funds will be managed and distributed. Some states allocate funds using a formula-based approach, while others distribute the funds based on PSAP requests. According to our survey results, 35 states had established written criteria on the allowable uses of E911 funds. Examples of allowable uses for the funds included the purchase of equipment upgrades, software packages, and training of personnel. At present, state and local governments determine how to pay for PSAP wireless E911 upgrades. We found, based on our survey results and NENA data, that 48 states and the District of Columbia collect surcharges to cover the costs of implementing wireless E911 (see fig. 4). For these states, funds are collected by wireless carriers from their subscribers. The other two states do not impose surcharges on wireless subscribers, but still have a wireless E911 funding mechanism in place. Specifically, the state E911 contact for Missouri told us that the state uses funds from local general revenue, local 911 taxes, and wireline funds for E911 implementation; and the Vermont state E911 contact said the state uses funds from the state's Universal Service Fund, which supports various telecommunications programs. For states that impose surcharges, the surcharge amount is usually established in state law. Responses to our survey indicated that the per-subscriber surcharges varied from state to state and ranged from $0.20 to $3.00 per month. We also found the surcharge amount could vary within a state. For example, one state has a maximum monthly surcharge amount of $1.50, and although most of the counties collect the maximum amount, several counties collect less than the maximum. Based on the responses to our survey, several states indicated that insufficient funding collected for wireless E911 was impeding the state's ability to implement this service. One state that relies on funds collected from both wireline and wireless surcharges to fund E911 reported that, due to Hurricane Katrina, it expected to see a drop in wireline funding over the next 2 to 3 years. A county official from that state said that because many residents and businesses impacted by the storm have not reestablished telephone service, the state is not receiving telephone fees from those residents. Another state reported that one of the biggest issues in implementing wireless E911 is the inability to collect funds from seasonal populations in many of the state's resort areas. Small towns in the state experience a large influx of tourists during various times of the year. However, because the state collects funds based on the billing address of the subscriber, counties in the state are limited in their ability to cover the costs of E911 services that out-of-state tourists expect while visiting local resort areas.
States and local governments have the authority to determine how they will manage and disburse their E911 funds. Of the 31 states that answered our question pertaining to the management of E911 funds, 23 indicated that the funds are managed at the state level, 6 said funds are managed locally, and 2 others indicated that the funds are managed by a mix of state and local entities. We found that various state-level entities can have authority to manage the funds collected for wireless E911 implementation, including the public utility commission, the treasury office, and state-level boards. In one state, for example, a state-level board composed of members from municipal organizations, PSAPs, state and local law enforcement agencies, local exchange service providers, and the wireless carrier industry established the criteria and guidelines for administering the funds. Of the states indicating that local governments manage the funds, one state said that the governing boards of 54 local 911 jurisdictions (51 counties and 3 cities) have the ultimate authority over the expenditure of wireless E911 funds. Methods of disbursement also varied. Some states use formulas based on various criteria to determine the amount of funds distributed to different localities; a hypothetical illustration of such a formula follows this discussion. For example, a number of states allocate the funds to localities based on criteria such as the volume of 911 calls made in the jurisdiction or the number of wireless subscribers. Other states allocate funds based on PSAP requests. One state reported that it reimburses county governments and providers for the costs they have incurred to implement wireless E911 services. Alternatively, the PSAPs in another state must first request funding from the state, which a state-level office must then approve.
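As a hypothetical illustration of a formula-based approach, the sketch below allocates an available fund to jurisdictions in proportion to their 911 call volume. The jurisdictions, call volumes, and fund amount are invented for illustration and do not describe any particular state's method.

```python
# Hypothetical sketch of a formula-based disbursement: each jurisdiction
# receives a share of the wireless E911 fund proportional to its annual
# 911 call volume. All names and figures are invented for illustration.
fund_total = 2_000_000.00  # dollars available for distribution this year

call_volume = {            # annual 911 calls by jurisdiction
    "County A": 120_000,
    "County B": 45_000,
    "City C": 35_000,
}

total_calls = sum(call_volume.values())
for jurisdiction, calls in call_volume.items():
    share = calls / total_calls
    print(f"{jurisdiction}: {share:.1%} of calls -> ${fund_total * share:,.2f}")
```

A state could substitute other criteria, such as wireless subscriber counts, for call volume in the same structure.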
As part of our survey, we asked the state E911 contacts if their states had established written criteria on the allowable uses of funds collected for the purposes of wireless E911 implementation. Of the 38 state contacts who responded to this question, 35 reported that their state had established written criteria, while the other 3 indicated that written criteria had not been established. Examples of allowable uses of funding include the purchase of equipment upgrades or software, personnel costs directly attributable to the delivery of 911 service, and costs related to the maintenance of Phase I and II services. For example, according to one state, its law permits wireless E911 funds to cover the salaries, benefits, and uniforms of 911 service employees such as call takers, dispatchers, and supervisors. We also asked the state E911 contacts if the state had any kind of oversight procedures to control the use of E911 funds. Of the 38 state contacts who responded to this question, 33 reported that their state had established oversight procedures to control the use of E911 funds, 3 others indicated no oversight procedures had been established, and 2 contacts did not know. According to our survey results, audits were the most common approach used to oversee the use of E911 funds. For example, one state reported that the wireless E911 fund and the state's wireless E911 services board are audited annually by the state's Auditor of Public Accounts and that the reports are available to the public online. Based on the responses of the 44 state E911 contacts who completed our survey, four states that collected funds for the purposes of wireless E911 implementation made those funds available or used them for purposes unrelated to E911 during 2005. For the six states and the District of Columbia that did not respond to our survey, we do not know whether they used any of their E911 funds for unrelated purposes. Four other states were unsure if their wireless E911 funds had been used for unrelated purposes because the funds are collected and maintained at the local level. See figure 5 for a complete breakout by state of the use of E911 funds during 2005, along with each state's progress in implementing E911 in its counties. For the four states that reported E911 funds were made available or used for purposes not related to E911 during 2005, the state contacts reported that the E911 funds were transferred to their state's general fund. For example, the E911 contact for North Carolina reported that E911 funds were transferred to the general fund to help balance the state budget. According to the E911 contact for Virginia, funds were transferred to both the general fund and to the state police, which responds to emergency calls in some areas of the state. Of these four states, Rhode Island has implemented Phase II services in all of its counties, and Virginia has implemented Phase II in 83 percent of its counties. Of the remaining two states, Illinois has implemented Phase II in 47 percent of its counties and North Carolina in 71 percent. One state responded to our survey that, while E911 funds have not been made available for other purposes, approximately $72 million in state wireline and wireless E911 fees collected have not been appropriated to the state 911 program. In other words, the funds remain in dedicated E911 accounts, but are "frozen" and are not being used to deploy or maintain E911 services. We heard from four other states that, because funds for wireless E911 are collected and managed by local jurisdictions without any state involvement, the states are unsure whether wireless E911 funds have been used for purposes other than wireless E911 implementation. For example, one state E911 contact told us that the state currently has no mechanism to monitor the use of wireless E911 funds at the local level. However, this contact further said that if a federal grant program for wireless E911 is funded, the state could establish a mechanism to validate that local jurisdictions were using wireless E911 funds only for allowable purposes. We provided FCC with a draft of this report for its review and comment. In response, FCC provided technical comments that we incorporated where appropriate. We are sending copies of this report to interested congressional committees and the Chairman, FCC. We will make copies available to others upon request. The report is available at no charge on GAO's Web site at http://www.gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Key contributors to this report were John Finedore, Assistant Director; Kimberly Berry, Andy Clinton, Stuart Kaufman, Sally Moino, Josh Ormond, Jay Smale, and Mindi Weisenbloom. The ENHANCE 911 Act of 2004 required us to review the imposition and use of taxes, fees, or other charges by states or localities that are designated to improve emergency communications services, including "enhanced 911" (E911).
As such, we are reporting on (1) the progress made in implementing wireless E911 services throughout the country, (2) the states and localities that have established taxes, fees, or charges for wireless E911 implementation, and (3) the states or localities that have used funds collected for the purposes of wireless E911 for unrelated purposes. To obtain general information on wireless E911 implementation, we interviewed officials from the National Emergency Number Association (NENA), the National Association of State 9-1-1 Administrators, and the Federal Communications Commission (FCC). We also met with officials from the Department of Transportation to learn the status of the E911 Implementation Coordination Office. To obtain information pertaining to the collection, management, and use of wireless E911 funds at the state and local level, we developed and administered a Web-based survey to state-level E911 contacts. The state E911 contacts are listed on FCC's Web site as the point of contact for emergency communications in their states and were provided by the governor of each state in response to a request from FCC. From September 21, 2005, through September 28, 2005, we conducted a series of "pretests" with state E911 contacts to help further refine our questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. Upon completion of the pretests and development of the final survey questions and format, we sent an announcement of the upcoming survey to the state E911 contacts (including the District of Columbia) on October 4, 2005. They were notified that the survey was available online on October 6, 2005. We sent follow-up e-mail messages to non-respondents as of October 30, 2005, and then attempted several times to contact those who had not completed the survey. The survey was available online until January 20, 2006. Of the population of 51 state E911 contacts who were asked to participate in our survey, we received 44 completed questionnaires for an overall response rate of 86 percent. Although the individual listed as the Wyoming state E911 contact was unable to answer the questionnaire, a representative at the county level completed it. We did not receive completed questionnaires from Alaska, Colorado, the District of Columbia, Nevada, New York, Oklahoma, and South Dakota. We administered the survey between October 2005 and January 2006. To view selected results of the survey, go to http://www.gao.gov/cgi-bin/getrpt?GAO-06-400sp. The practical difficulties of conducting surveys may introduce errors commonly referred to as "nonsampling error." For example, differences in how a particular question is interpreted or the sources of information available to respondents may introduce error. To minimize nonsampling error, we worked with a social science survey specialist to develop the questionnaire and conducted three pretests. In addition, steps were taken during the data analysis to minimize error further, such as performing computer analyses to identify inconsistencies and completing a review of data analysis by an independent reviewer. We contacted state budget officials for the four states that reported using funds collected for the purpose of E911 implementation for unrelated purposes to verify the information we received in response to our survey. Other than this, we did not independently verify the survey results.
To provide information on the progress made in deploying wireless E911, in addition to the survey, we used NENA data current as of January 2006. To assess the reliability of NENA’s data regarding the number of public safety answering points receiving Phase II data, we interviewed knowledgeable officials from NENA about their data collection methods and reviewed any existing documentation relating to the data sources. We determined that the data were sufficiently reliable for the purposes of this report. We conducted our review between February 2005 and January 2006 in accordance with generally accepted government auditing standards.
"Enhanced 911" (E911) service refers to the capability of public safety answering points to automatically receive an emergency caller's location information. An industry association estimates that nearly 82 million 911 calls are placed each year by callers using mobile phones. Wireless E911 technology provides emergency responders with the location and callback number of a person calling 911 from a mobile phone. The ENHANCE 911 Act of 2004 called for GAO to study state and local use of funds collected for the purpose of wireless E911 implementation. We are reporting on (1) the progress made in implementing wireless E911 services throughout the country, (2) the states and localities that have established taxes, fees, or charges for wireless E911 implementation, and (3) the states or localities that have used funds collected for the purposes of wireless E911 for unrelated purposes. To address these issues, we surveyed state-level E911 contacts on the collection and use of E911 funds. Of the 51 state E911 contacts (including the District of Columbia) who were asked to participate in our survey, we received 44 responses. We provided the Federal Communications Commission (FCC) with a draft of this report and FCC provided technical comments that we incorporated. Significant progress has been made towards implementing wireless E911 throughout the country since our November 2003 report. Deployment of wireless E911 usually proceeds through two phases: Phase I provides general location information by identifying the cell tower or cell site that is receiving the wireless call. Phase II provides more precise caller-location information, within 50 to 300 meters in most cases. We reported in November 2003, that nearly 65 percent of the more than 6,000 public safety answering points nationwide were capable of receiving Phase I information with wireless 911 calls and 18 percent had implemented Phase II wireless E911 with at least one wireless carrier. Currently, according to the National Emergency Number Association (NENA), nearly 80 percent of public safety answering points are capable of receiving Phase I location information and 57 percent have implemented Phase II for at least one wireless carrier. However, based on our survey results, full implementation is still several years away in many states. In response to our survey, three state E911 contacts reported that it will take more than 5 years to have wireless E911 completely implemented in their states, and five others said that the technology might never be fully implemented in their states. Based on our survey results and NENA data, we found that nearly all states--48 states and the District of Columbia--require the wireless carriers to collect surcharges from their subscribers to cover the costs associated with implementing wireless E911. Responses to our survey showed the per-subscriber surcharges ranged from $0.20 to $3.00 per month. The two states that do not impose surcharges fund E911 through general revenue or the state's Universal Service Fund, which was established to support various telecommunications programs. States have the discretion to determine how they will manage and distribute the funds and we found the management of the funds and methods of disbursement varied. According to our survey results, many of the states that responded have written criteria on the allowable uses of E911 funds. Allowable uses of the E911 funds include purchasing equipment upgrades and software packages. 
Four state E911 contacts responded to our survey that their states did not use all of the funds collected for E911 on E911 implementation purposes during 2005. Six states and the District of Columbia did not respond to our survey, so we do not know whether those states used E911 funds or made them available for other purposes. Four other states reported that they were unsure if all E911 funds were used solely for E911 purposes because the funds are collected and managed at the local level. The four states that reported that E911 funds were made available or used for purposes not related to E911 indicated that the funds were transferred to their states' general funds. For example, one state told us that E911 funds were transferred to the general fund to help balance the state budget. Another state reported that some E911 funds were transferred to the state police, since the state police answer emergency calls in some areas of the state.
Currently, public safety officials primarily communicate with one another using LMR systems that support voice communication and usually consist of handheld portable radios, mobile radios, base stations, and repeaters, as described below: Handheld portable radios are typically carried by emergency responders and tend to have a limited transmission range. Mobile radios are often located in vehicles and use the vehicle’s power supply and a larger antenna, providing a greater transmission range than handheld portable radios. Base station radios are located in fixed positions, such as dispatch centers, and tend to have the most powerful transmitters. A network is required to connect base stations to the same communication system. Repeaters increase the effective communication range of handheld portable radios, mobile radios, and base station radios by retransmitting received radio signals. Figure 1 illustrates the basic components of an LMR system. LMR systems are generally able to meet the unique requirements of public safety agencies. For example, unlike commercial cellular networks, which can allow seconds to go by before a call is set up and answered, LMR systems are developed to provide rapid voice call-setup and group-calling capabilities. When time is of the essence, as is often the case when public safety agencies need to communicate, it is important to have access to systems that achieve fast call-setup times. Furthermore, LMR systems provide public safety agencies “mission critical” voice capabilities—that is, voice capabilities that meet a high standard for reliability, redundancy, capacity, and flexibility. Table 1 describes the key elements for mission critical voice capabilities, as determined by the National Public Safety Telecommunications Council (NPSTC). According to NPSTC, for a network to fully support public safety mission critical voice communications, each of the elements in table 1 must address part of the overall voice communications services supported by the network. In other words, NPSTC believes a network cannot be a mission critical network without all of these elements. Furthermore, unlike commercial networks, mission critical communication systems rely on “hardened” infrastructure, meaning that tower sites and equipment have been designed to provide reliable communications even in the midst of natural or man-made disasters. To remain operable during disasters, mission critical communications infrastructure requires redundancy, backup power, and fortification against environmental stressors such as extremes of temperature and wind. Nationwide, there are approximately 55,000 public safety agencies. These state and local agencies typically receive a license from FCC to operate and maintain their LMR voice systems. Since these systems are supported by state and local revenues, the agencies generally purchase equipment and devices using their own local budgets without always coordinating their actions with nearby agencies, which can hinder interoperability. Since 1989, public safety associations have collaborated with federal agencies to establish common technical standards for LMR systems and devices, called Project 25 (P25). The purpose of these technical standards is to support interoperability between different LMR systems, that is, to enable seamless communication across public safety agencies and jurisdictions.
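The role of a common standard in the systems just described can be made concrete with a toy model. The sketch below is purely illustrative: the mode names are invented, it is not drawn from the P25 specification, and real conformance involves far more than a shared mode label.

```python
# Toy model of standards-based radio interoperability (illustrative only;
# the mode names are invented and do not come from the P25 specification).
from dataclasses import dataclass, field

@dataclass
class Radio:
    vendor: str
    modes: set = field(default_factory=set)  # air interfaces this radio implements

    def can_talk_to(self, other: "Radio") -> bool:
        # Two radios interoperate only if they share at least one mode.
        return bool(self.modes & other.modes)

# Two vendors implementing the common standard can communicate.
a = Radio("VendorA", {"common_standard"})
b = Radio("VendorB", {"common_standard"})
assert a.can_talk_to(b)

# A vendor shipping only a proprietary mode reaches its own radios
# but not the rest of the fleet, which is the gap a voluntary,
# incomplete standard leaves open.
c1 = Radio("VendorC", {"proprietary_mode"})
c2 = Radio("VendorC", {"proprietary_mode"})
assert c1.can_talk_to(c2) and not a.can_talk_to(c1)
```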
While the P25 suite of standards is intended to promote interoperability by making public safety systems and devices compatible regardless of the manufacturer, it is a voluntary standard and currently incomplete. As a result, many LMR devices manufactured for public safety are not compatible with devices made by rival manufacturers, which can undermine interoperability. The federal government plays an important role in public safety communications by providing funding for emergency communication systems and working to increase interoperable communication systems. Congress, in particular, has played a critical role by designating radio frequency spectrum for public safety use. Furthermore, Congress can direct action by federal agencies and others in support of public safety. For example, through the Homeland Security Act of 2002, Congress established DHS and required the department, among other things, to develop a comprehensive national incident management system comprising all levels of government and to consolidate existing federal government emergency response plans into a single, coordinated national response plan. FCC also plays a central role: it has designated 10 MHz of spectrum in the 700 MHz band for public safety broadband use, and in September 2006 it established its Public Safety and Homeland Security Bureau (PSHSB), which is responsible for developing, recommending, and administering FCC’s policies pertaining to public safety communications issues. FCC has issued a series of orders and proposed rulemakings and adopted rules addressing how to develop a public safety broadband network, some of which are highlighted here: In 2007, FCC adopted an order to create a nationwide broadband network with the 10 MHz of spectrum designated for a public safety broadband network and the adjacent 10 MHz of spectrum—the Upper 700 MHz D Block, or “D Block.” As envisioned by FCC, this nationwide network would be shared by public safety and a commercial provider and operated by a public/private partnership. However, when FCC presented the D Block for auction in 2008 under these conditions, it received no qualifying bids and thus the spectrum was not licensed. Subsequently, it was found that the lack of commercial interest in the D Block was due in part to uncertainty about how the public/private partnership would work. Although many stakeholders and industry participants called for the D Block to be reallocated to public safety, an alternate view is that auctioning the D Block for commercial use would have generated revenues for the U.S. Treasury. As noted previously, a provision in pending legislation, the Middle Class Tax Relief and Job Creation Act of 2012, reallocates the D Block to public safety. In 2007, FCC licensed the 10 MHz of spectrum that FCC assigned for public safety broadband use to the Public Safety Spectrum Trust (PSST), a nonprofit organization representing major national public safety associations. This 10 MHz of spectrum, located in the upper 700 MHz band, is adjacent to the spectrum allocated to public safety for LMR communications. As the licensee, the PSST’s original responsibilities included representing emergency responders’ needs for a broadband network and negotiating a network sharing agreement with the winner of the D Block auction. However, since the D Block was not successfully auctioned, FCC stayed the majority of the rules guiding the PSST.
In 2009, public safety entities began requesting waivers from FCC’s rules to allow early deployment of broadband networks in the 10 MHz of spectrum licensed to the PSST, and since 2010, FCC has granted waivers to 22 jurisdictions for early deployment. These jurisdictions had to request waivers because the rules directing the deployment of a broadband network were not complete. In this report, we refer to the 22 entities receiving waivers as “waiver jurisdictions.” As a condition of these waivers, FCC required that local or regional networks would interoperate with each other and that all public safety entities in the geographic area would be invited to use the new networks. In addition, FCC required that all equipment operating on the 700 MHz public safety broadband spectrum comply with Long Term Evolution (LTE), a commercial data standard for wireless technologies. As shown in table 2, of the 22 jurisdictions that successfully petitioned for waivers, only 8 received federal funding. Seven waiver jurisdictions received funding from NTIA’s Broadband Technology Opportunities Program (BTOP), a federal grant program authorized through the American Recovery and Reinvestment Act of 2009 that had several purposes, including promoting the expansion of broadband infrastructure. In January 2011, FCC adopted rules and proposed further rules to create an effective technical framework for ensuring the deployment and operation of a nationwide, interoperable public safety broadband network. As part of this proceeding, FCC sought comment on technical rules and security for the network as well as testing of equipment to ensure interoperability. The comment period for the proceeding closed on April 11, 2011, and FCC received comments from waiver jurisdictions, consultants, and manufacturers, among others. As of February 7, 2012, FCC did not have an expected issuance date for its final rules. In addition to FCC, DHS has been heavily involved since its inception in supporting public safety by assisting federal, state, local, and regional emergency response agencies and policy makers with planning and implementing interoperable communication networks. Within DHS, several divisions have focused on improving public safety communications. DHS also has administered groups that bring together stakeholders from all levels of government to discuss interoperability issues: The Emergency Communications Preparedness Center (ECPC) was created in response to Hurricane Katrina by the 21st Century Emergency Communications Act of 2006 to help improve intergovernmental emergency communications information sharing. The ECPC has 14 member agencies with a goal, in part, to support and promote interoperable public safety communications through serving as a focal point and clearinghouse for information. It has served to facilitate collaboration across federal entities involved with public safety communications. SAFECOM is a communications program that provides support, including research and development, to address interoperable communications issues. Led by an executive committee, SAFECOM has members from state and local emergency responders as well as intergovernmental and national public safety communications associations. DHS draws on this expertise to help develop guidance and policy. Among other activities, SAFECOM publishes annual grant guidance that outlines recommended eligible activities and application requirements for federal grant programs providing funding for interoperable public safety communications.
Within Commerce, NTIA and NIST are also involved in public safety communications by providing research support to the Public Safety Communications Research (PSCR) program. The PSCR serves as a laboratory and advisor on public safety standards and technology. It provides research and development to help improve public safety interoperability. For example, the PSCR has ongoing research in many areas related to communications, including the voluntary P25 standard for LMR communication systems, improving public safety interoperability, and the standards and technologies related to a broadband network. PSCR also conducts laboratory research to improve the audio and video quality for public safety radios and devices. Congress has appropriated billions of dollars in federal funding over the last decade to public safety in grants and other assistance for the construction and maintenance of LMR voice communication systems and the purchase of communication devices. Approximately 40 grant programs administered by nine federal agencies have provided this assistance for public safety. Some of the grants provided a one-time infusion of funds, while other grants have provided a more consistent source of funding. For example, in 2007, the one-time Public Safety Interoperable Communications Grant Program awarded more than $960 million to assist state and local public safety agencies in the acquisition, planning, deployment, or training on interoperable communication systems. However, the Homeland Security Grant Program has provided $6.5 billion since 2008, targeting a broad scope of programs that enhance interoperability for states’ emergency medical response systems and regional communication systems, as well as planning at the community level to improve emergency preparedness. See appendix II for more information about the grant programs. State and local governments have also invested millions of dollars of their own funds to support public safety voice communications, and continue to do so. Jurisdictions we visited that received federal grants to support the construction of a broadband network have continued to invest in the upgrade and maintenance of their current LMR voice systems. For example, Adams County, Colorado, has spent about $19.7 million since 2004 on its LMR system, including $6.9 million in local funds, supplemented with $12.8 million in federal grants. Mississippi, another jurisdiction we visited that is constructing a statewide broadband network, has spent about $214 million on its LMR network, including $57 million in general revenue bonds and $157 million in federal grants. Officials in the jurisdictions we contacted stressed the importance of investing in the infrastructure of their LMR networks to maintain the reliability and operability of their voice systems, since it was unclear at what point the broadband networks would support mission critical voice communications. In addition to upgrading and maintaining their LMR networks, many jurisdictions are investing millions of dollars to meet FCC’s requirement that communities use their spectrum more efficiently by reducing the bandwidth on which they operate. In addition to direct federal funding, the federal government has allocated more than 100 MHz of spectrum to public safety over the last 60 years.
The spectrum is located in various frequency bands since FCC assigned frequencies to public safety in new bands over time as available frequencies became congested and public safety’s need for spectrum increased. Figure 2 displays the spectrum allocated to public safety, which is located between 25 MHz and 4.9 GHz. As noted previously, the Middle Class Tax Relief and Job Creation Act of 2012 requires FCC to reallocate the D Block from commercial use to public safety use. Public safety agencies purchase radios and communication devices that are designed to operate on their assigned frequency. Since different frequencies of radio waves have different propagation characteristics, jurisdictions typically use the spectrum that is best suited to their particular location. For example, very high frequency (VHF) channels—those located between 30 and 300 MHz—are more useful for communications that must occur over long distances without obstruction from buildings, since the signals cannot penetrate building walls very well. As such, VHF signals are well suited to rural areas. On the other hand, ultra high frequency (UHF) channels—those located between 300 MHz and 3 GHz—are more appropriate for denser urban areas, as they have more capacity and can penetrate buildings more easily. When we visited Adams County, Colorado, we learned that public safety officials in the mountainous areas of Colorado use the 150 MHz and 450 MHz bands because of the range of the signals and their ability to navigate around the natural geography. However, public safety officials in the Denver, Colorado, metropolitan area operate on the 700 and 800 MHz frequency bands, which can support more simultaneous voice transmissions, such as communications between fire, police, public utility, and transportation officials. The current public safety LMR systems use their allocated spectrum to facilitate reliable mission critical voice communications. Such communications need to be conveyed in an immediate and clear manner regardless of environmental and other operating conditions. For example, while responding to a building fire, firefighters deep within the building need the ability to communicate with each other even if they are out of range of a wireless network. The firefighters are able to communicate on an LMR system because their handheld devices operate both on and off the network. Currently, emergency response personnel rely exclusively on their LMR systems to provide mission critical voice capabilities. One waiver jurisdiction we visited, Mississippi, is constructing a new statewide LMR system, and officials there noted a high degree of satisfaction with the planned LMR system. They said the new system is designed to withstand most disasters and, when complete, will provide interoperability across 97 percent of the state. Public safety officials in the coastal region of the state have already used the system to successfully respond to problems caused by the Mississippi River flooding in the spring of 2011. LMR public safety communication systems also are able to provide some data services, but the systems are constrained by the narrowband channels on which they operate. These channels allow only restricted data transfer speeds, thus limiting the capacity to send and receive data such as text and images, or to access existing databases. Some jurisdictions supplement their LMR systems with commercial data services that give them better access to applications that require higher data transfer rates to work effectively.
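The propagation trade-off described above follows from basic radio physics. As a rough illustration, under free-space conditions (ignoring terrain and building materials) path loss grows with frequency:

\[ \mathrm{FSPL}\,(\mathrm{dB}) = 20\log_{10} d_{\mathrm{km}} + 20\log_{10} f_{\mathrm{MHz}} + 32.45 \]

so at any fixed distance, moving from a 150 MHz VHF channel to an 800 MHz channel adds about \(20\log_{10}(800/150) \approx 14.5\) dB of loss, consistent with VHF signals carrying farther in open country. The narrowband data constraint can be quantified in a similar back-of-the-envelope way. Assuming, for illustration only, a narrowband data channel on the order of 10 kilobits per second (actual rates vary by system) against a 10 megabit per second broadband link, a 1-megabyte image (\(8 \times 10^6\) bits) takes

\[ \frac{8 \times 10^6\ \text{bits}}{10^4\ \text{bits/s}} \approx 13\ \text{minutes} \quad \text{versus} \quad \frac{8 \times 10^6\ \text{bits}}{10^7\ \text{bits/s}} < 1\ \text{second}, \]

which is one reason jurisdictions turn to commercial broadband for data-heavy applications.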
However, commercial service also has limitations, such as the lack of priority access to the network in an emergency situation. According to DHS, interoperability of current public safety communications has improved as a result of its efforts. In particular, the DHS National Emergency Communications Plan established a strategy for improving emergency communications across all levels of government, and as a result, all states have a statewide interoperability coordinator and governing body to make strategic decisions within the state and guide current and future communications interoperability. According to DHS, it has worked with states to help them evaluate and improve their emergency communications abilities. DHS also helped to develop the Interoperability Continuum, which identifies five critical success elements to assist emergency response agencies and policy makers to plan and implement interoperability solutions for data and voice communications. Furthermore, DHS created guidance to ensure a consistent funding strategy for federal grant programs that allow recipients to purchase communications equipment and enhance their emergency response capabilities. As we have reported in the past, interoperability has also improved due to a variety of local technical solutions. For example, FCC established mutual aid channels, whereby specific channels are set aside for the sole purpose of connecting incompatible systems. Another local solution is for agencies to maintain a cache of extra radios that they can distribute during an emergency to other first responders whose radios are not interoperable with their own. However, despite decades of effort, a significant limitation of current LMR systems is that they are not fully interoperable. One reason for the lack of interoperability is the fragmentation of spectrum assignments for public safety, since existing radios are typically unable to transmit and receive on all of these frequencies. Therefore, a rural area using public safety radios operating on VHF spectrum will not be interoperable with radios used in an urban area that operate on UHF spectrum. While radios can be built to operate on multiple frequencies, which could support greater interoperability, this capability can add significant cost to the radios and thus jurisdictions may be reluctant to make such investments. In addition, public safety agencies historically have acquired communication systems without concern for interoperability, often resulting in multiple, technically incompatible radio systems. This is compounded by the lack of mandatory standards for the current LMR systems or devices. Rather, the P25 technical standards remain incomplete and voluntary, creating incompatibility among vendors’ products. Furthermore, local jurisdictions are often unable to coordinate to find solutions. Public safety communication systems are tailored to meet the unique needs of individual jurisdictions or public safety entities within a given region. As such, the groups are reluctant to give up management and control of their systems. Numerous federal entities have helped to plan and begin to define a technical framework for a nationwide public safety broadband network. In particular, FCC, DHS, and Commerce’s PSCR program have coordinated their planning and made significant contributions by developing technical rules, educating emergency responders, and creating a demonstration network, respectively.
Since 2008, FCC has: Created a new division within its PSHSB, called the Emergency Response Interoperability Center (ERIC), to develop technical requirements and procedures to help ensure an operable and interoperable nationwide network. Convened two advisory committees, the ERIC Technical Advisory Committee and the Public Safety Advisory Committee, that provide advice to FCC. The Technical Advisory Committee’s appointees must be federal officials, elected officers of state and local government, or a designee of an elected official. It makes recommendations to FCC and ERIC regarding policies and rules for the technical aspects of interoperability, governance, authentication, and national standards for public safety. ERIC’s Public Safety Advisory Committee’s members can include representatives of state and local public safety agencies, federal users, and other segments of the public safety community, as well as service providers, equipment vendors, and other industry participants. Its purpose is to make recommendations for a technical framework that will ensure interoperability on a nationwide public safety broadband network. Defined technical rules for the broadband network, including identifying LTE as the technical standard for the network, which FCC and public safety agencies believe is imperative to the goal of achieving an interoperable nationwide broadband network. In addition, FCC sought comments on other technical aspects and challenges to building the network in its most recent proceeding, which FCC hopes will further promote and enable nationwide interoperability. FCC officials said they continue to monitor the waiver jurisdictions that are developing broadband networks to ensure they are meeting the network requirements by reviewing required reports and quarterly filings. Since 2010, DHS has: Partnered with FCC, Commerce, and the Department of Justice to conduct three forums for public safety agencies and others. These forums provided insight about the needs surrounding the establishment of a public safety broadband network as they relate to funding, governance, and the broadband market. Coordinated federal efforts on broadband implementation by bringing together the member agencies of ECPC. Also, ECPC updated its grant guidance for federal grant programs to clarify that broadband deployment is an allowable expense for emergency communications grant programs. These updates could result in more federal grant funding going to support the development of a broadband network. Updated its SAFECOM program’s grant guidance targeting grant applicants to include information pertaining to broadband deployment, based on input from state and local emergency responders. Worked with public safety entities to define the LTE standard and write educational materials about the broadband network. Partnered with state and regional groups and interoperability coordinators in preparing broadband guidance documentation. Represented federal emergency responders and advocated for sharing agreements between the federal government and the PSST that will enable federal users, such as responders from the Federal Emergency Management Agency, to access the broadband network. Since 2009, PSCR has: Worked with public safety agencies to develop requirements for the network and represented their interests before standards-setting organizations to help ensure public safety needs are met.
Developed a demonstration broadband network that provides a realistic environment for public safety and industry to test and observe public safety LTE requirements on equipment designed for a broadband network. According to PSCR representatives, the demonstration network has successfully brought together more than 40 vendors, including manufacturers and wireless carriers. Among many goals, PSCR aims to demonstrate to public safety how the new technology can meet their needs and encourage vendors to share information and results. FCC requires the 22 waiver jurisdictions and their vendors to participate in PSCR’s demonstration network and provide feedback on the challenges they have faced while building the network. PSCR representatives told us that the lessons learned from the waiver jurisdictions would be applied to future deployments. Tested interoperable systems and devices and provided feedback to manufacturers. Currently, there are five manufacturers working with PSCR to develop and test systems and devices. With higher data speeds than the current LMR systems, a public safety broadband network could provide emergency responders with new video and data applications that are not currently available. Stakeholders we contacted, including waiver jurisdictions, emergency responders, and federal agencies, identified transmission of video as a key potential capability. For example, existing video from traffic cameras and police-car-mounted cameras could provide live video feeds for dispatchers. Dispatchers could use the video to help ensure that the proper personnel and necessary equipment are being deployed immediately to the scene of an emergency. Stakeholders we contacted predict that numerous data applications will be developed once a broadband network is complete, and that these applications will have the potential to further enhance incident response. These could range from a global positioning system application that provides directions based on traffic patterns to a 3D graphical floor plan display that supports firefighters’ efforts to battle building fires. In addition, unlike the current system, a public safety broadband network could provide access to existing databases of information, such as fire response plans and mug shots of wanted criminals, which could help to keep emergency responders and the public safe. As shown in figure 3, moving from lower bandwidth voice communications to a higher bandwidth broadband network unleashes the potential for the development of a range of public safety data applications. Besides new applications, a public safety broadband network has the potential to provide nationwide access and interoperability. Nationwide access means emergency responders and other public safety officials could access their home networks from anywhere in the country, which could facilitate a better coordinated emergency response. Interoperability on a broadband network could allow emergency responders to share information irrespective of jurisdiction or type of public safety agency. For example, officials from two waiver jurisdictions indicated that forest fires are a type of emergency that brings together multiple jurisdictions, and in these situations a broadband network could facilitate sharing of response plans. However, an expert we contacted stressed that broadband applications should be tailored to the bandwidth needs of the response task.
For example, responders should not use high-definition video when grainy footage would suffice to enable them to pursue a criminal suspect. A major limitation of a public safety broadband network is that it would not provide mission critical voice communications for many years. LTE, the standard FCC identified for the public safety broadband network, is a wireless broadband standard that is not currently designed to support mission critical voice communications. Commercial wireless providers are currently developing voice over LTE capabilities, but this will not meet public safety’s mission critical voice requirements because key elements needed for mission critical voice, such as push-to-talk, are not part of the LTE standard. While one manufacturer believes mission critical voice over LTE will be available in as little as 5 years, some waiver jurisdictions, experts, government officials, and others told us it will likely be 10 years or more due to the challenges described in table 3. Absent mission critical voice capabilities on a broadband network, emergency responders will continue to rely on their current LMR voice systems, meaning a broadband network would supplement, rather than replace, LMR systems for the foreseeable future. Furthermore, until mission critical voice communications exist, issues that hampered emergency response efforts during the terrorist attacks on September 11, 2001—in particular, that emergency responders were not able to communicate between agencies—will not be resolved by a public safety broadband network. As a result, public safety agencies will continue to use devices operating on the current LMR systems for mission critical voice communications, and will continue to require spectrum allocated for that purpose. Additionally, public safety agencies may be reluctant to give up their LMR devices, especially if they were costly and are still functional. As jurisdictions continue to spend millions of dollars on their LMR networks and devices, they will likely continue to rely on such communication systems until they are no longer functional. In addition to not having mission critical communications, emergency responders may have only limited access to the public safety broadband network from the interior of large buildings. While the 700 MHz spectrum provides better penetration of buildings than other bands of the spectrum, if emergency responders expect to have access to the network from inside large buildings and underground, additional infrastructure will need to be constructed. For example, antennas or small indoor cellular stations could be installed inside buildings and in underground structures to support access to the network. FCC is seeking comment on this issue as part of its most recent proceeding. Without this added infrastructure, emergency responders using the broadband network may not have access to building blueprints or fire response plans during building emergencies, such as a fire. In fact, one jurisdiction we visited that is constructing a broadband network told us its network would not support in-building access in one of its cities because the plan did not include antennas for the interiors of buildings. A final limitation of a public safety broadband network could be its capacity during emergencies. Emergencies tend to happen in localized areas that may be served by a single cell tower or even a single cellular antenna on a tower.
With emergency responders gathering to fight a fire or other emergency, the number of responders and the types of applications in use may exceed the capacity of the network. If the network reaches capacity, it could overload and fail to deliver lifesaving information. Therefore, the network would have to be managed during emergencies to ensure that the most important data are being sent, which could be accomplished by prioritizing data; a simple illustration of such prioritization appears at the end of this discussion. Furthermore, capacity could be supplemented by dispatching deployable cell sites to emergency locations. Although the federal agencies have taken important steps to advance the broadband network, challenges exist that may slow its implementation. Specifically, stakeholders we spoke with prioritized five challenges to successfully building, operating, and maintaining a public safety broadband network. These challenges include (1) ensuring interoperability, (2) creating a governance structure, (3) building a reliable network, (4) designing a secure network, and (5) determining funding sources. FCC, in its Fourth Further Notice of Proposed Rulemaking, sought comment on some of these challenges, and as explained further, the challenge of creating a governance structure has been addressed by recent law. However, the other challenges currently remain unresolved and, if left unaddressed, could undermine the development of a public safety broadband network. Ensuring interoperability. To avoid a major shortcoming of the LMR communication systems, it is essential that a public safety broadband network be interoperable across jurisdictions and devices. DHS, in conjunction with its SAFECOM program, developed the Interoperability Continuum, which identifies five key elements of interoperable networks—governance, standard operating procedures, technology, training, and usage—that waiver jurisdictions and other stakeholders discussed as important to building an interoperable public safety broadband network, as shown in figure 4. For example, technology is critical to interoperability of the broadband network, and most stakeholders, including public safety associations, experts, and manufacturers, believe that identifying LTE as the technical standard was a good step toward interoperability. To further promote interoperability, stakeholders indicated that additional technical functionality, such as data sharing and roaming capabilities, should be part of the technical design. If properly designed to the technical standard, broadband devices will support interoperability regardless of the manufacturer. Testing devices to ensure they meet the identified standard could help eliminate devices with proprietary applications that might otherwise limit interoperability. In its Fourth Further Notice, FCC solicited input on the technical design of the network and testing of devices to ensure interoperability. Creating a governance structure. As stated previously, governance is a key element for interoperable networks. A governance authority can promote interoperability by bringing together federal, state, local, and emergency response representatives. Each of the waiver jurisdictions we contacted had identified a governance authority to oversee its broadband network. Jurisdictions we visited, as well as federal agencies, told us that any nationwide network should also have a nationwide governance entity to oversee it.
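Returning to the capacity-management point above, prioritizing data during an incident is a standard scheduling problem. The sketch below is a minimal illustration, not a description of any deployed system: the traffic classes are invented, and a real LTE network would enforce priority through its quality-of-service machinery rather than application-level code.

```python
# Minimal strict-priority transmit queue (illustrative; traffic classes
# are invented, and real networks enforce priority in the QoS layer).
import heapq

PRIORITY = {"dispatch_voice": 0, "telemetry": 1, "video": 2}  # 0 = most urgent

class TransmitQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-priority items keep arrival order

    def enqueue(self, kind: str, payload: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, payload))
        self._seq += 1

    def drain(self, capacity: int) -> list:
        # Send only what the congested cell can carry, most urgent first;
        # lower-priority traffic waits for capacity to free up.
        n = min(capacity, len(self._heap))
        return [heapq.heappop(self._heap)[2] for _ in range(n)]

q = TransmitQueue()
q.enqueue("video", "helmet-camera feed")
q.enqueue("dispatch_voice", "evacuate floor 3")
q.enqueue("telemetry", "air-tank level")
print(q.drain(capacity=2))  # ['evacuate floor 3', 'air-tank level']
```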
Although several federal entities are involved with the planning of a public safety broadband network, at the time we conducted our work no entity had overall authority to make critical decisions for its development, construction, and operation. According to stakeholders, decisions on developing a common language for the network, establishing user rights for federal agencies, and determining network upgrades could be managed by such an entity. Pending legislation, the Middle Class Tax Relief and Job Creation Act of 2012, establishes a First Responder Network Authority as an independent authority within NTIA and gives it responsibility for ensuring the establishment of a nationwide, interoperable public safety broadband network. Among other things, the First Responder Network Authority is required to (1) ensure nationwide standards for use and access of the network; (2) issue open, transparent, and competitive requests for proposals to private sector entities to build, operate, and maintain the network; (3) encourage that such requests leverage existing commercial wireless infrastructure to speed deployment of the network; and (4) manage and oversee the implementation and execution of contracts and agreements with nonfederal entities to build, operate, and maintain the network. Building a reliable network. A public safety broadband network must be as reliable as the current LMR systems, but it will require additional infrastructure to achieve that reliability. As mentioned previously, emergency responders consider the current LMR systems very reliable, in part because they can continue to work in emergency situations. Any new broadband network would need to meet similar standards but, as shown in figure 5, such a network might require up to 10 times as many towers as the current system. This is because a public safety broadband network is being designed as a cellular network, which would use a series of low-powered towers to transmit signals and reduce interference. Also, to meet robust public safety standards, each tower must be “hardened” to ensure that it can withstand disasters, such as hurricanes and earthquakes. According to waiver jurisdictions and other stakeholders, this additional infrastructure and hardening of facilities may be financially prohibitive for many jurisdictions, especially those in rural areas that currently use devices operating on VHF spectrum—spectrum that is particularly well suited to rural areas because the signals can travel long distances. Designing a secure network. Secure communications are essential, and designing a protected and trusted broadband network will encourage increased usage of and reliance on it. Security for a public safety network will require authentication and access control. By defining LTE as the technical standard for the broadband network, a significant portion of the security architecture is predetermined because the standard governs a certain level of security. Given the importance of this issue, FCC required waiver jurisdictions to include some security features in their networks, and FCC’s most recent proceeding seeks input on security issues. Furthermore, FCC’s Public Safety Advisory Committee has issued a report making several security-related recommendations. For example, it recommended that standardized security features be in place to support roaming to commercial technologies. However, one expert we contacted expressed concern that the waiver jurisdictions were not establishing sufficient network security because they had not received guidance.
He believes this would result in waiver jurisdictions using security standards applied to previous networks. Determining funding sources. It is estimated that a nationwide public safety broadband network could cost $15 billion or more to construct, which does not take into account recurring operation and maintenance costs. As noted previously, of the 22 waiver jurisdictions, 8 have received federal grants to support deployment of a broadband network. Some of the other waiver jurisdictions have obtained limited funding from nonfederal sources, such as through issuing bonds. Several of the jurisdictions we spoke with stressed that in addition to the upfront construction costs, the ongoing costs associated with operating, maintaining, and upgrading a public safety broadband network would need to be properly funded. As previously indicated, the ECPC and SAFECOM have updated grant guidance to reflect changing technologies, but this does not add additional funding for emergency communications. Rather, it defines broadband as an allowable purpose for emergency communications funding grants that may currently support the existing LMR systems. Since the LMR systems will not be replaced by a public safety broadband network, funding will be necessary to operate, maintain, and upgrade two separate communication systems. Handheld LMR devices often cost thousands of dollars, and many stakeholders, including national public safety associations, state and local public safety officials, and representatives from the telecommunications industry, attribute these high prices to limited competition. Industry analysts and stakeholders estimate that the approximately $4 billion U.S. market for handheld LMR devices consists of one manufacturer with about 75 to 80 percent market share, one or two strong competitors, and several device manufacturers with smaller shares of the market. According to industry stakeholders, competition is weak because of limited entry by device manufacturers; this may be due to (1) the market’s relatively small size and (2) barriers to entry that confront nonincumbent device manufacturers. Small size of the public safety market. The market for handheld LMR devices in the United States includes only about 2 to 3 million customers, or roughly 1 percent of the approximately 300 million customers of commercial telecommunication devices. According to an industry estimate in 2009, approximately 300,000 handheld LMR devices that are P25 compliant are sold each year. Annual sales of handheld LMR devices are small in part because of low turnover. For example, device manufacturers told us that public safety devices are typically replaced every 10 to 15 years, suggesting that less than 10 percent of handheld LMR devices are replaced annually. In contrast, industry and public safety sources indicate that commercial customers replace devices roughly every 2 to 3 years, suggesting that about 33 to 50 percent are replaced annually. Together, low device turnover and a small customer base reduce the potential volume of sales by device manufacturers, which may make the market unattractive to potential entrants. The size of the market is reduced further by the need for manufacturers to customize handheld LMR devices for individual public safety agencies. Differences in spectrum allocations across jurisdictions have the effect of decreasing the customer base for any single device. As previously discussed, public safety agencies operate on different frequencies scattered across the radio spectrum.
For example, one jurisdiction may need devices that operate on 700 MHz frequencies, whereas another jurisdiction may need devices that operate on both 800 MHz and 450 MHz frequencies. Existing handheld LMR devices typically do not transmit and receive signals in all public safety frequencies. As a result, device manufacturers cannot sell a single product to customers nationwide, and must tailor devices to the combinations of frequencies in use by the purchasing agency. Barriers to entry by nonincumbent manufacturers. Device manufacturers wishing to enter the handheld LMR device market face barriers in doing so, which further limits competition. The use of proprietary technologies represents one barrier to entry. The inclusion of proprietary technologies often makes LMR devices noninteroperable with one another. This lack of interoperability makes it costly for customers to switch the brand of their devices, since doing so requires them to replace or modify older devices. These switching costs may continually compel customers to buy devices from the incumbent device manufacturer, preventing less established manufacturers from making inroads into the market. For example, in a comment filed with FCC, one of the jurisdictions we visited said that device manufacturers offer a proprietary encryption feature for free or at only a nominal cost. When a public safety agency buys devices that incorporate this proprietary encryption feature, the agency cannot switch its procurement to a different manufacturer without undertaking costly modifications to its existing fleet of devices. Switching costs are particularly high when a device manufacturer has installed a communication system that is incompatible with competitors’ devices. In this scenario, a public safety agency cannot switch to a competitor’s handheld device without incurring the cost of new equipment or a patching mechanism to resolve the incompatibility. Even where devices from different manufacturers are compatible, a fear of incompatibility may deter agencies from switching to a nonincumbent brand. According to industry stakeholders—and as we have confirmed in the past—devices marketed as P25 compliant often are not interoperable in practice. This lack of confidence in the P25 standard may encourage agencies to continue buying handheld LMR devices from their current brand, placing less established device manufacturers at a disadvantage and thus discouraging competition. At the same time that less established manufacturers are at a disadvantage, the market leader enjoys distinct “incumbency advantages.” These advantages refer to the edge that a manufacturer derives from its position as incumbent, over and above whatever edge it derives from the strength of its product: According to an industry analyst, some public safety agencies are reluctant to switch brands of handheld LMR devices because their emergency responders are accustomed to the placement of the buttons on their existing devices. According to another industry analyst, the extensive network of customer representatives that the market leader has established over time presents an advantage. According to this analyst, less established device manufacturers face difficulty winning contracts because their networks of representatives are comparatively thin. The well-recognized brand of the market leader also represents an advantage.
According to one stakeholder, some agencies mistakenly believe that only the market leader is able to manufacture devices compliant with P25, and thus conduct sole-source procurements with this manufacturer. Even where procurements are competitive, the market leader is likely to enjoy an upper hand over its competitors; according to an industry analyst, local procurement officers prefer to buy handheld LMR devices from the dominant device manufacturer because doing so is an uncontroversial choice in the eyes of their management. Competition aside, handheld LMR devices are costly to manufacture, so their prices will likely exceed prices for commercial devices regardless of how much competition exists in the market. First, this is in large part because these devices need to be reinforced for harsh operating environments. Handheld LMR devices must be able to withstand extremes of temperature as well as physical stressors such as dust, smoke, impact, and immersion in water. Second, they also have much more robust performance requirements than commercial devices—including greater transmitter and battery power—to enable communication at greater ranges and during extended periods of operation. Third, the devices are produced in quantities too small to realize the cost savings of mass production. Manufacturers of commercial telecommunication devices can keep prices lower simply because of the large quantities they produce. For example, one industry stakeholder told us that economies of scale begin for commercial devices when a million or more devices are produced per manufacturing run. In contrast, LMR devices are commonly produced in manufacturing runs of 25,000 units. Fourth, the exterior of handheld LMR devices must be customized to the needs of emergency responders. For example, the buttons on these devices must be large enough to press while wearing bulky gloves. In addition, given that the P25 standard remains incomplete and voluntary, device manufacturers develop products based on conflicting interpretations of the standard, resulting in incompatibilities between their products. Stakeholders from one jurisdiction we visited said that agencies can request add-on features—such as the ability to arrange channels according to user preference or to scan for radio channels assigned for particular purposes—which fall outside the P25 standard. These features increase the degree of customization required to produce handheld LMR devices, pushing costs upward. Furthermore, public safety agencies may be unable to negotiate lower prices for handheld LMR devices because they cannot exert buying power in their relationships with device manufacturers. We found that public safety agencies are not in an advantageous position to negotiate lower prices because they often request customized features and negotiate with device manufacturers in isolation from one another. According to a public safety official in one jurisdiction we contacted, each agency has unique ordinances, purchasing mechanisms, and bidding processes for devices. Because public safety agencies contract for handheld LMR devices in this independent manner, they sacrifice the quantity discounts that come from placing larger orders. Moreover, they are unlikely to know what other agencies pay for similar devices, enabling device manufacturers to offer different prices to different jurisdictions rather than set a single price for the entire market.
One public safety official told us that small jurisdictions therefore pay more than larger jurisdictions for similar devices. As we have reported in the past, agencies that require similar products can combine their market power—and therefore obtain lower prices—by engaging in joint procurement. Therefore, wider efforts to coordinate procurement at the state, regional, or national level are likely to increase the buying power of public safety agencies and help bring down prices. Although these factors drive up prices in the current market for handheld LMR devices, industry observers said that many of these factors will diminish in the future market for handheld broadband devices. As described earlier, FCC has mandated a commercial standard, LTE, for devices operating on the new broadband networks. The use of this standard may reduce the prevalence of proprietary features that inhibit interoperability. In addition, the new broadband networks will operate on common 700 MHz spectrum across the nation, eliminating the need to customize devices to the frequencies in use by individual jurisdictions. Together, the adoption of a commercial standard and the use of common spectrum are likely to increase the uniformity of handheld public safety devices, which in turn is likely to strengthen competition and enable the cost savings that come from bulk production. In addition, industry analysts and federal officials told us that they expect a heightened level of competition in the market for LTE devices because multiple device manufacturers are expected to develop them. Options exist to reduce prices in the market for handheld LMR devices by increasing competition and the bargaining power of public safety agencies. One option is to reduce barriers to entry into the market. As described above, less established manufacturers may be discouraged from entering the market for handheld LMR devices because of the lack of interoperability between devices produced by different manufacturers. Consistent implementation of the P25 standard would increase interoperability between devices, enabling public safety agencies to mix and match handheld LMR devices from different brands. As we have reported in the past, independent testing is necessary to ensure compliance with standards and interoperability among products. In the past several years, NIST and DHS have established a Compliance Assessment Program (CAP) for the P25 standard. CAP provides a government-led forum in which to test devices for conformance with P25 specifications. If the CAP program succeeds in increasing interoperability, it may reduce switching costs—that is, the expense of changing manufacturers—and thus may open the door to greater competition. Although CAP is a promising means to lower costs in this way, it is too soon to assess its effectiveness. A second option is for public safety agencies to engage in joint procurement to lower costs. Joint procurement of handheld LMR devices could increase the bargaining power of agencies as well as facilitate cost savings through quantity discounts. One public safety official we interviewed said that while local agencies seek to maintain control over operational matters—such as which emergency responders operate on which channels—they are likely to cede control in procurement matters if doing so lowers costs. As described earlier in this report, DHS provides significant grant funding, technical assistance, and guidance to enhance the interoperability of LMR systems.
For example, as described in its January 2012 Technical Assistance Catalog, DHS’s Office of Emergency Communications supports local public safety entities to ensure that LMR design documents meet P25 specifications and are written in a vendor-neutral manner. Based on its experience in emergency communications and its outreach to local public safety representatives, DHS is positioned to facilitate and incentivize opportunities for joint procurement of handheld LMR devices. An alternative approach to fostering joint procurement is through a federal supply schedule. In 2008, the Local Preparedness Acquisition Act, Pub. L. No. 110-248, 122 Stat. 2316 (2008), gave state and local governments the opportunity to buy emergency response equipment through GSA’s Cooperative Purchasing Program. The Cooperative Purchasing Program may provide a model for extending joint procurement to state and local public safety agencies. As discussed in this report, a public safety broadband network would not provide mission critical voice capabilities for many years. As a result, a public safety broadband network would likely supplement, rather than replace, current LMR systems for the foreseeable future. Although a public safety broadband network could enhance incident response, it would have limitations and be costly to construct. Furthermore, since the LMR systems will still be operational for many years, funding will be necessary to operate, maintain, and upgrade two separate public safety communication systems. At the time of our work, no administrative entity had the authority to plan, oversee, or direct the use of the public safety broadband spectrum. As a result, overarching management decisions had not been made to guide the development or deployment of a public safety broadband network. According to SAFECOM’s Interoperability Continuum, governance structures provide a framework for collaboration and decision making with the goal of achieving a common objective and therefore foster greater interoperability. In addition to ensuring interoperability, a governance entity with proper authority could help to address the challenges identified in this report, such as ensuring the network is secure and reliable. Pending legislation, the Middle Class Tax Relief and Job Creation Act of 2012, establishes an independent authority within NTIA to manage and oversee the implementation of a nationwide, interoperable public safety broadband network. Handheld communication devices used by public safety officials can cost thousands of dollars, mostly due to limited competition and high manufacturing costs. However, public safety agencies also lack buying power vis-à-vis the device manufacturers, which may result in the agencies overpaying for the devices. In particular, since public safety agencies negotiate individually with device manufacturers, they are unlikely to know what other agencies pay for comparable devices and they sacrifice the increased bargaining power and economies of scale that accompany joint purchasing. Public safety agencies in smaller jurisdictions, especially in rural areas, may be overpaying for handheld devices. We have repeatedly recommended joint procurement as a cost-saving measure for situations where agencies require similar products because it allows them to combine their market power and lower their procurement costs. Given that DHS has expertise in emergency communications and relationships with local public safety representatives, we believe it is well suited to facilitate opportunities for joint procurement of handheld communication devices.
To help ensure that public safety agencies are not overpaying for handheld communication devices, the Secretary of Homeland Security should work with federal and state partners to identify and communicate opportunities for joint procurement of public safety LMR devices.

We provided a draft of this report to Commerce, DHS, the Department of Justice, and FCC for their review and comment. In the draft report we sent to the agencies, we included a matter for congressional consideration for ensuring that a public safety broadband network has adequate direction and oversight, such as by creating a governance structure that gives authority to an entity to define rules and develop a plan for the overarching management of the network. As a result of pending legislation that addresses this issue, we removed the matter for congressional consideration from the final report.

Commerce provided written comments, reprinted in appendix III, in which it noted that NIST and NTIA will continue to collaborate with and support state, local, and tribal public safety agencies and other federal agencies to help achieve effective and efficient public safety communications.

In commenting on the draft report, DHS concurred with our recommendation that it should work with federal and state partners to identify and communicate opportunities for joint procurement of public safety LMR devices. While DHS noted that this recommendation will not likely assist near-term efforts to implement a public safety broadband network, assisting efforts for the broadband network was not the intention of the recommendation. Rather, we intended this recommendation to help ensure that public safety agencies do not overpay for handheld LMR devices by encouraging joint procurement. DHS suggested in response to our recommendation that a GSA solution may be more appropriate than DHS contracting activity. Although we recognize that a GSA solution is one possibility for joint procurement of handheld LMR devices, other opportunities and solutions might exist. We believe DHS, based on its experience in emergency communications and its outreach to state and local public safety representatives, is best suited to identify such opportunities and solutions for joint procurement and communicate those to the public safety agencies. In its letter, DHS also noted that it continues to work with federal, state, local, and private-sector partners to facilitate the deployment of a nationwide public safety broadband network, and stressed that establishing an effective governance structure is crucial to ensuring interoperability and effective use of the network. DHS's written comments are reprinted in appendix IV.

Commerce, DHS, the Department of Justice, and FCC provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Attorney General, the Secretary of Commerce, the Chairman of FCC, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V.
This report examines current communication systems used by public safety and issues surrounding the development of a nationwide public safety broadband network. Specifically, we reviewed (1) the resources that have been provided for current public safety communication systems and their capabilities and limitations, (2) how a nationwide public safety broadband network is being planned and its anticipated capabilities and limitations, (3) the challenges to building a nationwide public safety broadband network, and (4) the factors that influence competition and cost in the development of public safety communication devices and the options that exist to reduce prices.

To address all objectives, we conducted a literature review of 43 articles from governmental and academic sources on public safety communications. We reviewed these articles and recorded relevant evidence in workpapers, which informed our report findings. To identify existing studies, we conducted searches of various databases, such as EconLit, ProQuest, Academic OneFile, and Social SciSearch. We also pursued a snowball technique—following citations from relevant articles—to find other relevant articles and asked external researchers that we interviewed to recommend additional studies. These research methods produced 106 articles for initial review. We vetted this initial list by examining summary-level information about each piece of literature, giving preference to articles that appeared in peer-reviewed journals and were germane to our research objectives. As a result, the 43 studies that we selected for our review met our criteria for relevance and quality. For the 13 articles related to our fourth objective—factors that affect competition and cost in the market for public safety communication devices—a GAO economist performed a secondary review and confirmed their relevance to our objective. Articles were then reviewed and evidence captured in workpapers. The workpapers were then reviewed for accuracy of the evidence gathered. We performed these searches and identified articles from June 2011 to September 2011.

We also interviewed government officials or stakeholders in 6 of the 22 jurisdictions that are authorized to build early public safety broadband networks and obtained information concerning each objective. In particular, we obtained information concerning their current communication systems and their capabilities, including any funding received to support the current network. We discussed their plans for building a public safety broadband network and the challenges they had faced thus far, including the role each thought the federal government should play in developing a network. We also discussed their views on the communication device market and the factors shaping the market. We selected jurisdictions to contact based on three criteria: (1) whether the jurisdiction received grant funds from the National Telecommunications and Information Administration (NTIA) to help build the network, (2) whether the planned network would be a statewide or regional network, and (3) geographic distribution across the nation. Table 4 lists the jurisdictions we selected based on these criteria. We selected jurisdictions based on NTIA grant funding because these jurisdictions had received the most significant federal funds dedicated towards developing a broadband network. Other jurisdictions either had not identified any funding or had applied smaller grant funds that were not primarily targeted at emergency communications.
We selected the size of the network, statewide or regional, to determine if challenges differed based on the size of the network and the number of entities involved. Finally, we selected sites based on geographic region to obtain a mix of jurisdictions from around the country. In jurisdictions that received NTIA funding, we met with government officials and emergency responders. In jurisdictions that did not receive NTIA funding, we met with government officials only, since the network had not progressed as far.

To determine the resources that have been provided for current public safety communication systems, we reviewed Federal Communications Commission (FCC) data on spectrum allocations for land mobile radio (LMR) systems. In addition, we reviewed relevant documentation and interviewed officials from offices within the Departments of Commerce (Commerce), Homeland Security (DHS), and Justice that administer grant programs or provide grants that identify public safety communications as an allowable expense. We selected these agencies because they administered more of the grant programs that provide such funds or were regularly mentioned in interviews as providing funds for public safety communications. We also reviewed documents from agencies, such as the Departments of Agriculture and Transportation, which similarly operate grant programs that identify public safety communications as an allowable expense. The grants were identified by DHS's SAFECOM program as grants that can support public safety communications.

To identify the capabilities and limitations of current public safety communication systems, we reviewed relevant congressional testimonies, academic articles on the capabilities and limitations of LMR networks, and relevant federal agency documents, including DHS's National Emergency Communications Plan. We interviewed officials from three national public safety associations—the Association of Public-Safety Communications Officials (APCO), National Public Safety Telecommunications Council (NPSTC), and the Public Safety Spectrum Trust (PSST)—as well as researchers and consultants referred to us for their knowledge of public safety communications and identified during the literature review process.

To determine the plans for a nationwide public safety broadband network and its expected capabilities and limitations, we reviewed relevant congressional testimonies and academic articles on the services and applications likely to operate on a public safety broadband network and the challenges to building, operating, and maintaining such a network. We interviewed officials from APCO, NPSTC, and PSST, as well as researchers and consultants who specialize in public safety communications to understand the potential capabilities of the network. In addition, we reviewed FCC orders and notices of proposed rulemaking relating to broadband for public safety, as well as comments on this topic submitted to FCC. To determine the federal role in the public safety broadband network, we interviewed multiple agencies involved in planning this network. Within FCC, we interviewed officials from the Public Safety and Homeland Security Bureau (PSHSB), the mission of which is to ensure public safety and homeland security by advancing state-of-the-art communications that are accessible, reliable, resilient, and secure, in coordination with public and private partners.
Within Commerce, we interviewed officials from NTIA and the National Institute of Standards and Technology (NIST), two agencies that develop, test, and advise on broadband standards for public safety. We also interviewed officials from the Public Safety Communications Research (PSCR) program, a joint effort between NIST and NTIA that works to research, develop, and test public safety communication technologies. Within DHS, we interviewed officials from the Office of Emergency Communications (OEC) and the Office of Interoperability and Compatibility (OIC), two agencies that provide input on the public safety broadband network through their participation on interagency coordinating bodies.

To determine the technological, historical, and other factors that affect competition in the market for public safety devices, as well as what options exist to reduce the cost of these devices, we reviewed the responses to FCC's notice seeking comment on competition in public safety communications technologies. In addition, we reviewed our prior reports and correspondence on this topic between FCC and the House of Representatives Committee on Energy and Commerce that occurred in June and July of 2010 and April and May of 2011. We also conducted an economic literature review that included 13 academic articles examining markets for communications technology and, in particular, how issues of standards, compatibility, bundling, and price discrimination affect entry and competition in these markets. These articles provided a historical and theoretical context for communication technology markets, which helped shape our findings. We asked about factors affecting the price of public safety devices, as well as how to reduce these prices, during our interviews with national public safety organizations, local and regional public safety jurisdictions, and the federal agencies we contacted during our audit work. We also interviewed two researchers specifically identified for their knowledge of communication equipment markets based on their congressional testimony or publication history. In addition, we interviewed representatives from four companies that produce public safety devices or network components, as well as two financial analysts who track the industry.

We conducted this performance audit from March 2011 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

SAFECOM, a program administered by DHS, has identified federal grant programs across nine agencies (the Departments of Agriculture, Commerce, Education, Health and Human Services, Homeland Security, the Interior, Justice, and Transportation, and the U.S. Navy) that allow grant funds to be used for public safety emergency communications efforts. These grants include recurring grants that support emergency communications, research grants that fund innovative and pilot projects, and past grants that may be funding ongoing projects. While funding from these grants can support emergency communications, the totals reported were not necessarily spent entirely on emergency communications. We provided the amounts of the grants and the years funded when this information was available.
Two agencies within Commerce—NTIA and NIST—administer grants that allow funds to be directed towards public safety emergency communications (see table 5). Two agencies within DHS administer grants that allow funds to be directed towards public safety emergency communications—the Federal Emergency Management Agency (FEMA) and the Science and Technology Directorate. Another agency, OEC, has administered one such grant program. Furthermore, DHS maintains an authorized equipment list to document equipment eligible for purchase under its grant programs, including interoperable communications equipment. (See table 6.) Two offices within the Department of Justice, the Office of Community Oriented Policing Services (COPS) and the Office of Justice Programs (OJP), administer grants that allow funds to be directed towards public safety emergency communications (see table 7). Six additional federal agencies administer grants that can fund public safety emergency communications: the Departments of Agriculture (USDA), Transportation (DOT), Health and Human Services (HHS), Education, and the Interior, and the U.S. Navy (see table 8).

In addition to the contact named above, Sally Moino, Assistant Director; Namita Bhatia-Sabharwal; Dave Hooper; Eric Hudson; Josh Ormond; Bonnie Pignatiello Leer; Ellen Ramachandran; Andrew Stavisky; Hai Tran; and Mindi Weisenbloom made significant contributions to this report.

Emergency Communications: National Communications System Provides Programs for Priority Calling, but Planning for New Initiatives and Performance Measurement Could Be Strengthened. GAO-09-822. Washington, D.C.: August 28, 2009.

Emergency Communications: Vulnerabilities Remain and Limited Collaboration and Monitoring Hamper Federal Efforts. GAO-09-604. Washington, D.C.: June 26, 2009.

First Responders: Much Work Remains to Improve Communications Interoperability. GAO-07-301. Washington, D.C.: April 2, 2007.

Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-740. Washington, D.C.: July 20, 2004.

Project SAFECOM: Key Cross-Agency Emergency Communications Effort Requires Stronger Collaboration. GAO-04-494. Washington, D.C.: April 16, 2004.

Homeland Security: Challenges in Achieving Interoperable Communications for First Responders. GAO-04-231T. Washington, D.C.: November 6, 2003.
Emergency responders across the nation rely on land mobile radio (LMR) systems to gather and share information and coordinate their response efforts during emergencies. These public safety communication systems are fragmented across thousands of federal, state, and local jurisdictions and often lack “interoperability,” or the ability to communicate across agencies and jurisdictions. To supplement the LMR systems, in 2007, radio frequency spectrum was dedicated for a nationwide public safety broadband network. Presently, 22 jurisdictions around the nation have obtained permission to build public safety broadband networks on the original spectrum assigned for broadband use.

As requested, this report examines (1) the investments in and capabilities of LMR systems; (2) plans for a public safety broadband network and its expected capabilities and limitations; (3) challenges to building this network; and (4) factors that affect the prices of handheld LMR devices. GAO conducted a literature review, visited jurisdictions building broadband networks, and interviewed federal, industry, and public safety stakeholders, as well as academics and experts.

After the investment of significant resources—including billions of dollars in federal grants and approximately 100 megahertz of radio frequency spectrum—the current land mobile radio (LMR) systems in use by public safety provide reliable “mission critical” voice capabilities. For public safety, mission critical voice communications must meet a high standard for reliability, redundancy, capacity, and flexibility. Although these LMR systems provide some data services, such as text and images, their ability to transmit data is limited by the channels on which they operate. According to the Department of Homeland Security (DHS), interoperability among LMR systems has improved due to its efforts, but full interoperability of LMR systems remains a distant goal.

Multiple federal entities are involved with planning a public safety broadband network, and while such a network would likely enhance interoperability and increase data transfer rates, it would not support mission critical voice capabilities for years to come, perhaps even 10 years or more. A broadband network could enable emergency responders to access video and data applications that improve incident response. Yet because the technology standard for the proposed broadband network does not support mission critical voice capabilities, first responders will continue to rely on their current LMR systems for the foreseeable future. Thus, a broadband network would supplement, rather than replace, current public safety communication systems.

There are several challenges to implementing a public safety broadband network, including ensuring the network’s interoperability, reliability, and security; obtaining adequate funds to build and maintain it; and creating a governance structure. For example, to avoid a major shortcoming of the LMR systems, it is essential that a public safety broadband network be interoperable across jurisdictions and devices by following five key elements of interoperable networks: governance, standard operating procedures, technology, training, and usage. With respect to creating a governance structure, pending legislation—the Middle Class Tax Relief and Job Creation Act of 2012—establishes, among other things, a new entity, the First Responder Network Authority, with responsibility for ensuring the establishment of a nationwide, interoperable public safety broadband network.
The price of handheld LMR devices is high—often thousands of dollars—in part because market competition is limited and manufacturing costs are high. Further, GAO found that public safety agencies cannot exert buying power in relation to device manufacturers, which may result in the agencies overpaying for LMR devices. In particular, because public safety agencies contract for LMR devices independently from one another, they are not in a strong position to negotiate lower prices, and they forgo the quantity discounts that accompany larger orders. For similar situations, GAO has recommended joint procurement as a cost saving measure because it allows agencies requiring similar products to combine their purchasing power and lower their procurement costs. Given that DHS has experience in emergency communications and relationships with public safety agencies, it is well-suited to facilitate joint procurement of handheld LMR devices.

The Department of Homeland Security (DHS) should work with partners to identify and communicate opportunities for joint procurement of public safety LMR devices. In commenting on a draft of this report, DHS agreed with the recommendation. GAO also received technical comments, which have been incorporated, as appropriate, in the report.
Geography and geospatial or location-based technologies are ubiquitous in daily life, from the navigation units in cars to applications on smart phones. These technologies, which include global positioning systems (GPS) and geographic information systems (GIS), are used in a myriad of ways, from crisis mapping in Haitian earthquake relief efforts to deciding where to locate supermarkets in underserved communities in Philadelphia (see sidebar). According to the Department of Labor, employment of specialists in geography, or geographers, is projected to grow 29 percent from 2012 to 2022—much faster than the average 11 percent growth for all occupations. As we recently reported, the federal government collects, maintains, and uses technology that depends on geography to support national security, law enforcement, health care, environmental protection, and natural resources conservation. Among the many activities that can depend on analysis of geospatial data are maintaining roads and other critical transportation infrastructures, quickly responding to natural disasters, such as floods, hurricanes, and fires, and tracking endangered species (see fig. 1).

According to the National Geographic Society, the teaching of geography can include a disproportionate focus on dates, events, and individuals. However, geography—the study of places and the relationship between people and their environment—is an integral tool used to understand and make informed decisions about global problems. Currently in elementary and middle grades, geography content is often combined and taught with history, civics, and economics, under the umbrella of “social studies.” High schools can offer geography as a separate course or within the social studies curriculum. High schools can also offer the advanced placement exam in Human Geography to students in grades 9 through 12. Although geography is taught as a part of social studies in most U.S. schools, it is an integrated discipline that can be taught with other subjects, such as science, technology, engineering, and mathematics (STEM). Some stakeholder groups have stated that aspects of the discipline are, in fact, STEM-related, and several federal agencies include geography, GIS, or cartography as STEM disciplines. The Office of Science and Technology Policy, which coordinates federal investments in STEM education programs, also includes geographic sciences/geography and geographic technologies as part of the STEM disciplines. Although some STEM education programs may have components that support K-12 geography education, according to the Office of Science and Technology Policy there are no federal STEM education programs that directly support K-12 geography education.

The National Center for Education Statistics (NCES) in Education tracks what students know and how well they understand and apply geography concepts as part of the National Assessment of Educational Progress (NAEP), a nationally representative student assessment in various subject areas taken in grades 4, 8, and 12. The NAEP geography assessment is built around a “geography framework” that determines the content assessed. The geography framework melds key physical science and social science aspects of geography, and contains a content dimension and a cognitive (or thinking) dimension (see fig. 2). The framework also includes achievement levels that describe what students should know and be able to do to reach basic, proficient, and advanced levels of achievement.
NAEP defines three achievement levels applicable to each grade assessed in geography. “Basic” denotes partial mastery of the knowledge and thinking skills needed to perform adequate work in grades 4, 8, and 12. “Proficient” represents solid academic performance and competency over challenging subject matter. “Advanced” represents performance that is equal to that expected of top students in other industrialized nations. The National Assessment Governing Board (Governing Board), established by law to set policy for NAEP, believes, however, that all students should reach the proficient level; the basic level is not the desired goal, but rather represents partial mastery that is a step toward proficient.

NCES is required by law to conduct national assessments in reading and mathematics at least once every two years in grades 4 and 8, and at “regularly scheduled intervals” in grade 12. To the extent that time and resources allow, Education may conduct assessments in grades 4, 8, and 12 at regularly scheduled intervals in additional subjects, including writing, science, history, geography, civics, economics, foreign language, and arts. The Governing Board is responsible for determining the subjects and grades to be tested, in accordance with provisions in the NAEP statute. Geography is one of 10 core academic subjects as defined in the Elementary and Secondary Education Act of 1965 (ESEA), but is not identified as a subject that states must test under the Act. ESEA requires states to demonstrate progress toward the goal of all students meeting academic achievement standards while closing achievement gaps among various groups of students, such as economically disadvantaged students. To enable states to show progress, the law contains testing and reporting requirements for reading/language arts, math, and science. However, states may still require assessments in other subject areas, such as geography. In addition, ESEA contains requirements related to NAEP. For example, in order to receive a subgrant under Title I, Part A of ESEA, a school district must have a plan on file with a state educational agency that includes, among other things, an assurance by the school district that the district will participate, if selected, in the NAEP fourth and eighth grade reading and math assessments.

Most eighth grade students—about three-quarters—in 2014 scored below the proficient level, indicating partial or less than partial mastery in geography, according to our analysis of Education’s nationally representative NAEP data (see fig. 3). This finding was consistent with fourth and 12th grade geography proficiency levels in prior assessments, according to Education’s 2010 publication on NAEP. For example, in 2010, only 21 percent of fourth graders and 20 percent of 12th graders performed at or above the proficient level. In 2014, certain groups of eighth grade students outperformed others. For example, students who were not eligible for the federal free or reduced-price lunch program scored, on average, higher than students who were eligible, while students in private schools slightly outperformed students in public schools. However, average test scores for all of these student groups were below the proficient level. Since the geography assessment using the current framework was first administered in 1994, the average test scores among eighth grade students have shown no change, with average scores for all students nationwide remaining below proficient for 20 years.
However, certain student groups made modest gains in achievement. For example, average test scores in geography increased for White, Black, and Hispanic students since 1994 (see fig. 4). Further, average test scores appeared to have increased among Asian/Pacific Islander students, but unlike the increases for the other groups, these increases were not statistically significant. Additionally, the gap in average test scores between both White and Black students and White and Hispanic students narrowed slightly since the test was first administered.

Data on student access to geography education showed that a small portion of instruction time is spent on the subject. Our analysis of 2014 teacher survey data, a component of the NAEP geography assessment, showed that 50 percent of eighth grade teachers reported spending 3 to 5 hours per week of classroom instruction time on social studies—the vehicle through which geography is taught. Of those teachers spending 3 to 5 hours per week of classroom instruction time on social studies, more than half reported that “10 percent or less” of their social studies time was spent on geography. In addition, half of all eighth grade students in 2014 reported learning about geography “a few times a year” or “hardly ever.” Further, NAEP’s data collection methodologies allow for analyses of geography instruction time by race/ethnicity, gender, school lunch program eligibility, and attendance at private or public school; generally, we observed no significant differences in these areas among teachers who reported spending “10 percent or less” of their social studies time on geography (see fig. 5). We also found that instruction time spent on geography has remained largely the same since 2010.

As part of the 2014 NAEP teacher survey, teachers nationwide also reported varying degrees of frequency when teaching geography skills to eighth grade students and using technology. Teachers more often reported teaching geography skills—such as spatial dynamics and connections, use of maps and globes, and other countries and cultures—once or twice a month than at more frequent intervals. In addition, as part of the 2014 NAEP survey, teachers reported using technology to teach geography to varying degrees. For example, when teaching geography, over half of teachers reported using computers to a “moderate or large extent” and one-third of teachers reported using computers to a “small extent” or “not at all.”

Variations in state requirements in geography can limit student access to geography education. According to one university research center’s 2013 survey of states’ geography education requirements, most states did not require geography courses in middle school or high school. According to this survey, only 17 states required a geography course in middle school and 10 states required a geography course for students to graduate from high school (see fig. 6). As found in our previous work, states, schools, and teachers continue to focus instruction time on subjects other than geography. Specifically, as schools spend more time improving students’ reading, math, and science skills to meet federal accountability requirements, there are concerns that other subjects might be reduced or eliminated. Similarly, prior studies from Education show that instruction time in reading/English and math has increased over past decades, while instruction time in social studies—the vehicle through which geography is generally taught—has declined.
Officials from national organizations we interviewed also expressed concern that geography education has not been given the same national priority as reading, math, and science—subjects associated with federal testing requirements under Title I, Part A of ESEA. State education officials and K-12 teachers we interviewed echoed this sentiment, stating that allocating resources for geography education was challenging in the face of greater national and state focus on tested subjects. Officials from all four state educational agencies with which we conducted interviews told us they faced challenges in ensuring that geography standards remained an integral part of the state curriculum. For example, one state official told us how the state had eliminated geography from the curriculum for over a decade, and only recently added geography courses back amid concerns from the community that students were lacking essential geography skills. Similarly, all 10 teachers we spoke with reported that geography instruction has decreased in recent years due to a greater emphasis on teaching math and reading. Half of the 10 teachers described pressures to improve student test scores in reading and math, which hindered their ability to devote time to social studies and geography—subjects that generally do not have required tests. Among the 10 teachers we interviewed, almost all described not having sufficient time to teach geography as the top challenge to providing students with a geography education. Five of the 10 teachers also reported that teaching geography was not viewed as important in their district or school. For example, one teacher said she was told that her students’ test scores in geography did not “count,” and two of the geography teachers expressed concern about losing their jobs because geography and social studies courses were likely being removed from the curriculum. As shown in figure 7, officials and stakeholders we interviewed, as well as key reports on the subject, cited other challenges to providing geography education.

Perception of Geography – There is a common misconception about what geography entails, according to officials we interviewed and relevant reports we reviewed. Even key education stakeholders such as teachers, principals, and parents mistakenly think geography education involves fact-based memorization, according to officials we interviewed from three of our selected national geography organizations. According to one report, this view has likely persisted because it was the stakeholders’ personal experience when they were students. Officials we interviewed from the three geography organizations reported that this limited view of geography has made the subject easier to dismiss or ignore. In addition, among the state officials and teachers we interviewed, several confirmed that this public perception of geography makes obtaining support and resources more difficult.

Teacher Preparation – Many teachers who teach geography do not have an educational background in the subject and take few, if any, geography courses in college, according to a 2013 geography education report. Teachers and state education officials we interviewed similarly stated that not enough professional development is devoted to the subject, and many teachers are not comfortable teaching geography.
Our analysis of Education’s 2014 NAEP data shows variation in the amount and types of professional development activities that teachers reported participating in during the previous two years related to the teaching of civics, geography, history, or social studies.

Instructional Materials – Geography instructional materials do not often showcase the depth and breadth of the subject or offer hands-on learning opportunities for students, according to officials from national groups we interviewed. One report noted that K-12 geography instructional materials—which largely consist of textbooks, atlases, and ancillary materials, such as workbooks or teacher PowerPoint presentations—must move beyond activities that require students to solely label maps. However, all of the state officials we interviewed said many districts do not have the resources to upgrade their instructional materials in geography. In addition, among the 10 K-12 teachers we interviewed, seven reported they do not have time to search for better lessons and materials in geography, although they acknowledged that there are many resources and materials available online at low or no cost. The report also noted that when teachers are not well-prepared to teach geography, instructional materials become especially important.

Technology – While technology can present opportunities to better engage students, using technology to teach geography can also present challenges. Our analysis of Education’s NAEP data shows that in 2014 more teachers reported using technology-based geography instruction than in 2010. One report notes that there are a growing number of K-12 geography lessons using geospatial technologies, such as GIS and remote sensing; however, geography classrooms still largely depend on textbooks and lecture-based instruction. Further, officials we interviewed from three of our selected states described wide variation in the use of technology across districts and schools and expressed concern that many teachers did not receive training on how to incorporate the technology into their lessons. Among the 10 teachers we interviewed, seven reported frustrations with using technology to teach geography, such as outdated software, lack of technical support at their school, and poor internet connections.

Other Challenges – State officials and teachers we interviewed also identified other challenges. For example, obtaining support from parents, state and local education leaders, and other external parties can prove difficult when revising geography standards or dedicating resources, according to officials we interviewed from two of our selected states. Five of the teachers we interviewed also described challenges trying to acquire resources for geography education, such as classroom materials, field trips, or technology.

Education’s role with respect to geography education primarily involves assessing student performance in the subject, and providing data and the results of its analysis to the public. Education’s National Center for Education Statistics (NCES) periodically administers the NAEP geography assessment. NCES communicates the findings of NAEP to the public by publishing the Nation’s Report Card, which provides summary information on how well students are performing in geography based on a common test. NCES also provides restricted-use and public-use NAEP data and tools for running statistical analyses, and guidance on how researchers and other stakeholders can use these data for their own analyses.
Along with the Nation’s Report Card, the NAEP frameworks, as developed by the National Assessment Governing Board, outline the content and skills assessed in each NAEP subject. These NAEP frameworks are available to the public and can serve as guidelines for planning other assessments or revising curricula. Officials we interviewed from two of our four selected states reported looking at the NAEP geography frameworks when revising state geography standards and curricula. However, since its development in 1994, the NAEP geography assessment has not been administered as regularly as the reading and mathematics assessments because assessing achievement in geography is not required by law. Specifically, the NAEP geography assessment has been conducted in 1994, 2001, 2010, and 2014. As a result of budget constraints, the 2014 NAEP geography assessment tested eighth grade students but not fourth and 12th grade students, as had been the case for previous assessments (see fig. 8). According to NCES officials we interviewed, the eighth grade was chosen because most students would have exposure to geography by that grade level. Officials said there was a risk that fourth graders may not have been exposed to geography and insufficient funding precluded them from assessing students in the 12th grade. Officials we interviewed from four of our selected national geography organizations reported that not assessing fourth and 12th grade students in geography will hinder research efforts related to K-12 geography education by making trend analyses for these grade levels impossible.

The Governing Board and NCES officials told us there are plans to bolster the 2018 NAEP in geography, contingent on funding. NCES plans to test 12th grade students in the next geography assessment, and officials told us it is important to assess the geography skills students have upon leaving high school. In addition, officials said that, if funding allows, the 2018 geography NAEP will be digitally based and administered on tablets. NCES officials told us the geography assessment, and its content in particular, can benefit from the digital format by allowing the use of more interactive maps and geographic technology.

Other than NAEP, Education has no initiatives or programs specific to K-12 geography education. Officials told us that although the agency previously supported some activities related to geography through a program focused on civics, funding for this program is no longer available. Similarly, Education funded a research grant involving the use of software in social studies and geography classrooms, but this project ended in 2011. In addition, Education’s technology initiatives do not incorporate or include information about geography education or geographic technology in K-12 classrooms. As part of the President’s ConnectED Initiative, Education’s Office of Educational Technology provides general guidance to states, school districts, and schools on how existing federal funds can support digital learning. However, this office does not provide guidance specific to geographic technologies in the classroom. Further, Education officials said they sometimes receive requests from external parties, including teachers and school districts, asking about resources for geography. In the absence of dedicated program funding for geography education, agency staff respond to such requests themselves and sometimes direct the parties to national geography groups, like the National Geographic Society.
We provided a draft of this report to the Department of Education for review and comment. Education provided technical comments, which we incorporated as appropriate. We are sending copies to the Secretary of Education and appropriate congressional committees. The report will also be available at no charge on the GAO website at www.gao.gov. If you or your staff members have any questions, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

This appendix discusses in detail our methodology for addressing three research questions for geography education: (1) To what extent do K-12 students have proficiency in and access to geography education? (2) What challenges do selected school officials and teachers face in providing geography education? and (3) What role does the Department of Education have with respect to geography education?

To address these questions, we relied on multiple methodologies. We reviewed relevant federal laws, policies, and guidance. We analyzed nationally representative Education data on student proficiency in and access to geography education. We determined that the data were sufficiently reliable for the purposes of this report by testing them for accuracy and completeness, reviewing documentation about the systems used to produce the data, and interviewing agency officials. We conducted outreach to four states, which were selected to provide variation in geography requirements. In addition, we reviewed selected studies and research reports, including a report with results from a 50-state survey of geography requirements. We also conducted interviews with officials from the Department of Education, state officials, and a nonrepresentative group of K-12 social studies and geography teachers.

To determine the federal role with respect to geography education, we reviewed relevant federal laws, including the most recent reauthorization of the Elementary and Secondary Education Act of 1965 and the National Assessment of Educational Progress Authorization Act. We also reviewed relevant federal policies and guidance related to geography education.

To address the extent to which K-12 students are proficient in and have access to geography education, we analyzed Department of Education data from nationally representative samples of public and nonpublic school students in grades 4, 8, and 12 from the National Assessment of Educational Progress (NAEP) in geography for 1994, 2001, 2010, and 2014—the years in which the NAEP assessments in geography were administered. In this report, we presented data on eighth grade students, as it was the only grade for which data were available from 1994 through 2014, the most recent NAEP assessment in geography. We compared differences in proficiency in and access to geography education across student demographics, including race, gender, and poverty measures. Our analysis was based on reported proficiency as determined by the national NAEP geography assessment scores as well as data collected from the NAEP student, teacher, and school survey questionnaires. Eighty-five percent of teachers who responded to the 2014 NAEP teacher questionnaire taught only social studies, while the remainder taught all or most subjects or taught in teams.
To provide information on students’ access to geography education, we analyzed and reported on access as measured by (1) teacher- and student-reported instruction time spent on geography in the classroom, as well as (2) exposure to other geography-related skills and topics, such as spatial dynamics and using maps and globes, as reported by teachers.

To gather more in-depth information on the challenges school officials face in providing geography education and state-level geography education requirements, we conducted interviews with officials in four states—Arkansas, California, Florida, and Virginia. States were selected to provide variation in geography education standards, curricula, and requirements at the K-12 level. For each of these states, we gathered information from officials at state departments of education. In addition, we reviewed key documents related to state K-12 education requirements, curricula, and standards in geography. Our findings associated with our outreach to these selected states cannot be generalized to all states’ K-12 education population; rather, they provide insight into a range of views and experiences with K-12 geography education.

To supplement our analysis of nationally representative K-12 data on student proficiency in and access to geography education, we reviewed key reports, studies, and research related to K-12 geography education. These reports and studies were identified through online searches of relevant material and through recommendations from stakeholders we interviewed. In addition, we also reviewed the results of several Education surveys on changes in access, or instruction time, across different subjects. These surveys were recommended by National Center for Education Statistics officials and included the Schools and Staffing Survey, a system of related questionnaires that provides descriptive data on the context of elementary and secondary education, and the National Longitudinal Study of No Child Left Behind, which collected nationally representative data on changes in instruction time among elementary school teachers.

To gain a national picture of K-12 geography standards and requirements, we analyzed state-level information from a 2013 survey conducted by Texas State University’s Gilbert M. Grosvenor Center for Geographic Education. This survey collected self-reported information from 50 states and the District of Columbia on middle school and high school geography requirements. We reviewed the survey instrument and study methodology and interviewed officials responsible for administering the survey. We determined that these data were sufficiently reliable for the purposes of this report.

To further understand the role the Department of Education plays with respect to geography education, we interviewed Education officials as well as relevant national groups. At Education, we interviewed officials from the Office of Elementary and Secondary Education, the Office of Educational Technology, the National Assessment Governing Board, and the National Center for Education Statistics. We also coordinated with officials from the Office of Science and Technology Policy to identify any science, technology, engineering, or mathematics initiatives that include geography education. Further, to describe the challenges school officials face in providing geography education, we conducted interviews with 10 K-12 social studies and geography teachers.
These teachers were identified with assistance from the National Council for the Social Studies—an organization with a membership of approximately 15,000 teachers of history, civics, geography, and related subjects. In response to our request, staff at the National Council for the Social Studies sent an interview request, prepared by GAO, to 1,150 of their members. These members were selected at random, in equal proportions across the elementary, middle, and high school levels. We conducted a total of 10 interviews with teachers who responded to this email request. While the teachers who self-selected for our interviews provide some insight into the views and experiences of teachers, their responses are not generalizable to all teachers of geography.

We also coordinated with or interviewed representatives from a broad range of national groups in K-12 geography education and K-12 education, including the National Geographic Society, Association of American Geographers, National Council for Geographic Education, American Geographical Society, National Council for the Social Studies, National Science Teachers Association, Council of Chief State School Officers, Grosvenor Center for Geographic Education, RAND, and ESRI.

We conducted this performance audit from December 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Sherri Doughty (Assistant Director), Alison Gerry Grantham (Analyst-in-Charge), Claudine Pauselli, and Josiah Williams made key contributions to this report. Also contributing to this report were James Ashley, Nabajyoti Barkakati, Deborah Bland, Holly Dye, Kirsten Lauber, John Mingus, Tom Moscovitch, Mimi Nguyen, Karen O’Conor, Gloria Proa, and James Rebbe.
Geography—the study of places and the relationship between people and their environment—is present across many facets of modern life, from tracking lost cell phones to monitoring disease outbreaks like Ebola. The growing use of geographic information and location-based technology across multiple sectors of the American economy has prompted questions about whether K-12 students’ skills and exposure to geography are adequate for current and future workforce needs. Senate Report 113-71 included a provision for GAO to report on the status of geography education and the challenges elementary and secondary schools face in providing geography education with limited resources.

In this report, GAO examined (1) the extent to which eighth grade students are proficient in geography; (2) the challenges selected school officials and teachers face in providing geography education; and (3) the role of the Department of Education with respect to geography education. GAO reviewed relevant federal laws; analyzed nationally representative Education data on student proficiency and instruction time in geography; interviewed education officials in four states selected, in part, for varying K-12 geography requirements; reviewed key studies and research reports, including a 50-state 2013 survey of geography requirements; and interviewed agency officials and researchers. GAO also leveraged a professional association to identify and interview 10 K-12 teachers.

GAO is not making recommendations in this report. Education provided technical comments, which GAO incorporated as appropriate.

About three-quarters of eighth grade students—the only grade for which trend data are available—were not “proficient” in geography in 2014, according to GAO’s analysis of nationally representative data from the Department of Education (Education). Specifically, these students had not demonstrated solid competence in the subject, and the proficiency levels of eighth grade students have shown no improvement since 1994 (see figure). Geography is generally taught as part of social studies, but data show that more than half of eighth grade teachers reported spending a small portion (10 percent or less) of their social studies instruction time on geography. Further, according to a study by an academic organization, a majority of states do not require geography courses in middle school or high school.

A key challenge to providing geography education is the increased focus on other subjects, according to officials in selected states and K-12 teachers GAO interviewed. These officials and teachers said spending time and resources on geography education is difficult due to national and state focus on the tested subjects of reading, math, and science. GAO’s interviews and review of relevant reports identified a range of other challenges as well, including misconceptions about what geography education entails; lack of teacher preparation and professional development in geography; poor quality of geography instructional materials; and limited use of geographic technology in the classroom.

Education’s role with respect to geography education primarily involves assessing student performance in the subject, and providing data and the results of its analyses to the public. Education periodically assesses student achievement in geography, and other areas, but not with the same regularity as other subjects it is required by law to assess.
Beyond assessments, Education officials said that absent funding specifically for geography-focused programs, the agency is hindered in its ability to support geography education.
The F-22 is an air superiority aircraft with the capability to deliver air-to-ground weapons. The most significant advanced technology features include supercruise, the ability to fly efficiently at supersonic speeds without using fuel-consuming afterburners; low observability to adversary systems; and integrated avionics to significantly improve the pilot’s situational awareness. The objectives of the F-22 EMD program, begun in 1991, are to (1) design, fabricate, test, and deliver 9 F-22 flight test vehicles, 2 ground test articles, and 26 flight qualified engines; (2) design, fabricate, integrate, and test the avionics suite; and (3) design, develop, and test the F-22 system support and training systems. In June 1996, because of indications of potential cost growth on the F-22 program, the Assistant Secretary of the Air Force for Acquisition chartered a Joint Estimating Team (JET) consisting of personnel from the Air Force, the Department of Defense (DOD), and private industry. The objectives of the JET were to estimate the most probable cost of the F-22 program and to identify realistic initiatives that could be implemented to lower program costs. In January 1997, the JET estimated the F-22 EMD program would cost $18.688 billion, an increase of about $1.45 billion over the previous Air Force estimate. The JET also reported that additional time would be required to complete the EMD program and recommended changes to the EMD schedule. The Air Force and Under Secretary of Defense for Acquisition and Technology adopted the JET’s recommendations, including its cost estimate for the EMD program. Other JET recommendations included slowing the manufacturing of the EMD aircraft to ensure an efficient transition from development to low-rate initial production and increasing the time available to develop and integrate avionics software. In August and September 1997, the Air Force negotiated changes with the prime contractors to more closely align the cost-plus-award-fee contracts with the JET cost estimate and revised schedule. However, as of January 1998, many substantial planned changes recommended by the JET had not been incorporated into the Lockheed Martin contract, such as changes to the avionics estimated to cost $221 million. The National Defense Authorization Act for Fiscal Year 1998, enacted on November 18, 1997, imposed cost limitations of $18.688 billion on the F-22 EMD program and $43.4 billion on the production program. The limitation on production cost did not specify a quantity of aircraft to be procured. The act instructed the Secretary of the Air Force to adjust the cost limitations for (1) the amounts of increases or decreases in costs attributable to economic inflation after September 30, 1997, and (2) the amounts of increases or decreases in costs attributable to compliance with changes in federal, state, or local laws enacted after September 30, 1997. Conferees for the Department of Defense Appropriations Act, 1998, enacted October 8, 1997, provided direction to the Secretary of the Air Force regarding out-of-production parts on the F-22 program. Because it is not economical for some component manufacturers to keep production lines open to produce old technology parts with low demand, they have discontinued making parts, some of which are used on the F-22 EMD aircraft, and will discontinue making others. 
To minimize cost and schedule impacts on the F-22 EMD and production programs, the Air Force plans to redesign these out-of-production parts or buy sufficient quantities of them for the first five lots of production aircraft. The appropriations conferees directed the Secretary of the Air Force to fund the cost of redesigning out-of-production parts from the Research, Development, Test and Evaluation appropriation. The effect of following that direction would be to add that effort, expected by the Air Force to cost $353 million, to the EMD program. In January 1998, the Air Force notified the Congress that it increased the EMD cost limitation by $353 million, to respond to direction from the conferees, and decreased the production cost limitation by the same amount. As adjusted, the EMD cost limitation increased to $19.041 billion. In addition, the Air Force plans to adjust the cost limitation downward by $102 million to $18.939 billion, to recognize revisions to inflation assumptions by the Office of the Secretary of Defense.

In the fiscal year 1999 President’s budget, the Air Force’s estimate to complete the EMD program was $18.884 billion. The estimated cost to complete EMD includes the negotiated prices of the major prime contracts, estimated costs of significant planned contract modifications, other government costs, and a margin to accommodate future cost growth. Although contractor reports through December 1997 project that the efforts on contract are expected to be completed within the negotiated contract prices, manufacturing problems with the wings and the aft fuselage could change those projections for Lockheed Martin. The Air Force contracts with Lockheed Martin and Pratt & Whitney, after being restructured, had negotiated prices of $16.003 billion. The Air Force, as of December 1997, planned to add modifications to the contracts totaling about $1.546 billion. The Air Force’s estimated costs for F-22 EMD are shown in table 1.

Air Force officials provided us a list of planned and budgeted modifications that will increase the contract prices. The list is consistent with the JET findings. Planned modifications include:
- efforts directed by the conferees on the Department of Defense Appropriations Act, 1998, to redesign out-of-production parts ($353 million);
- award fees to be paid to the contractor based on evaluations of contractor performance ($262 million);
- extending the time period for F-22 testing ($230 million);
- addition of changes (Block IV) to the avionics, including interface capability with the newly developed AIM-9X air-to-air missile ($221 million);
- extending the time period for keeping an active laboratory infrastructure ($158 million);
- efforts to provide the capability to perform air combat simulation and ground testing of avionics prior to its delivery ($65 million);
- provision for contractor resources to conduct initial operational test and evaluation ($60 million);
- efforts to test and approve the F-22 for supersonic launch of external missiles ($51 million);
- implementation of aircraft battle damage repair capability ($29 million); and
- other changes ($117 million).

Since the contracts were restructured in August and September 1997, limited experience has been accumulated to indicate the extent to which contractors are completing scheduled work at the planned cost. Contractor reports reflecting experience through December 1997, however, indicate the contractors are predicting they will be able to complete efforts now covered by the contract within the negotiated costs.
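As a cross-check on the modification figures above, the itemized amounts sum to the stated $1.546 billion, which, added to the $16.003 billion in negotiated prices, gives $17.549 billion of contract effort underlying the $18.884 billion EMD estimate (the remainder being other government costs and the growth margin). The minimal sketch below simply verifies that arithmetic using only figures from the text:

```python
# Planned F-22 EMD contract modifications, in millions of dollars,
# as itemized in the list above.
modifications = [353, 262, 230, 221, 158, 65, 60, 51, 29, 117]

total_mods = sum(modifications)
print(f"Planned modifications: ${total_mods} million")      # 1,546

# Negotiated contract prices plus planned modifications, in billions.
negotiated_prices = 16.003
contract_effort = negotiated_prices + total_mods / 1000
print(f"Contract effort: ${contract_effort:.3f} billion")   # 17.549
```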
Lockheed Martin and Pratt & Whitney report to the Air Force monthly concerning their progress compared to contract costs and schedules. These reports define the cost and schedule variances from the contract plans. When the contracts were restructured, the contractors rebaselined their cost control systems that measure the cost and schedule progress and calculate how the actual costs and schedules vary from the goals. Prior to restructuring the contracts, the Lockheed Martin and Pratt & Whitney reports indicated unfavorable variances at completion of EMD totaling about $1.2 billion. Both Pratt & Whitney and Lockheed Martin reports showed variances of less than 1 percent from the negotiated contract cost and planned schedule through December 1997. The most significant variance identified in the contractor reports was about a $54 million unfavorable schedule variance for Lockheed Martin. The contractors’ reports showed that the negotiated costs include about $194 million for management reserves. Management reserves are amounts set aside to react to cost increases due to unplanned efforts or cost growth in planned efforts.

Lockheed Martin and Pratt & Whitney December 1997 reports indicate they plan to complete the contract efforts within the negotiated costs. However, the impact of delays in the delivery of wing and aft fuselage assemblies and in the flight test program has not been reflected in those reports. Lockheed Martin has advised the Air Force that it can execute the revised schedule caused by the late deliveries at no increased cost to the EMD contract. At the time of our review, the Air Force was assessing the impact of these delays and whether it agrees that the changes can be accomplished with no cost increase to the EMD contract.

In January 1998, the F-22 program was not meeting its schedule goals. The first flight of an F-22 did not occur on time, and resumption of its flight test program will be delayed by at least 2 to 3 weeks to correct a problem discovered in the horizontal tail of the aircraft. Also, the late delivery of aft fuselage assemblies and wing assemblies is expected to cause delays in delivery of other EMD aircraft. These problems will also delay the progress of the flight test program. The Air Force has revised its schedule to reflect the late first flight. However, it had not determined how the late deliveries of aft fuselage assemblies and wing assemblies will impact the overall F-22 EMD schedule. The Air Force planned to complete its evaluation of the impact on the schedule by the end of February 1998.

Because of a number of technical problems with the aircraft, the first flight of the first F-22 EMD aircraft was delayed over 3 months, until September 7, 1997. According to the Air Force, the problems were not caused by the design of the aircraft but involved a fuel tank leak, failure of an auxiliary power unit resulting from faulty installation, a software defect, incorrect installation of the electrical connector to a fuel tank probe, and foreign object damage from debris being ingested into an engine. After the aircraft made two flights, the flight test program was suspended to accomplish planned ground tests and minor structural additions to the airframe. Resumption of the flight test program, planned for March 1998, is expected to be delayed until at least April 17, 1998, because materials in the horizontal tail of the aircraft became disbonded, or separated.
Air Force officials said a solution to this problem has been identified and it will not impact other EMD aircraft schedules. The flight test schedule was updated in May 1997 based on the review of the program by the JET, with first flight planned to occur in late May 1997. However, because first flight did not occur as scheduled, the beginning of the flight test program was delayed. Flight tests are also expected to be delayed because of problems with manufacturing wings and aft fuselages and expected late delivery of the third through sixth EMD flight test aircraft. Because of the delay in first flight and expected delays in delivery of several later EMD aircraft, a number of flight test hours planned for fiscal years 1998 and 1999 have been deferred until later in the test program: about 55 percent (120 of 217 hours) of the flight test hours planned for fiscal year 1998 and about 11 percent (51 of 449 hours) of the flight test hours planned for fiscal year 1999. Although test hours planned for the early stages of the flight test program are now planned to be accumulated more slowly, Air Force officials said the total number of flight test hours planned, the number of flight test months planned, and the completion date for the F-22 EMD program remain about the same.

Wing deliveries are behind schedule because of problems with the development and manufacturing of large titanium wing castings, the foundation upon which the wing is built. As of January 1998, the contractor and the Air Force were still working to resolve the casting problem. The wings for the next four flight test aircraft and the two ground test articles are expected to be delivered about 2 weeks to over 4 months late to Lockheed Martin. Delivery of the F-22 aft fuselage—the rear aircraft body section—is expected to be late for the next four flight test aircraft and the two ground test articles because of late parts deliveries and difficulties with the welding process caused by tight tolerances when fitting the many pieces of the fuselage together. An Air Force and contractor team has been formed to evaluate potential cost, schedule, testing, and production impacts associated with this problem. This team plans to complete its assessment by the end of February 1998.

As a result of the late deliveries of the wings and aft fuselages, the first flights of the third through the sixth EMD aircraft are expected to be from about 2 weeks to over 5 months late. Air Force officials said first flight of the second EMD aircraft is expected to occur on schedule because the time available between production of the first and second EMD aircraft is expected to be sufficient to allow the manufacturing problems to be corrected. Since there was significantly less time scheduled between the second EMD aircraft and subsequent EMD aircraft, first flight of later EMD aircraft will be delayed. Table 2 compares the May 1997 scheduled first flights to the expected dates of first flights as of January 1998.

The Air Force estimates that the F-22 will meet or exceed the goals for the major performance parameters. These include 10 parameters for which the Air Force reports regularly to DOD, and two additional performance features we reviewed that relate to other critical characteristics of the F-22 aircraft. The Air Force estimates how performance is expected to compare to specific goals for each parameter by estimating and summarizing the performance of relevant subparameters.
The estimates are engineering judgments based on computer and other models, tests of some components in flying test beds, ground tests, analyses, and, to a limited extent, flight tests. The goal for each parameter is based on the EMD contract specifications. Goals for the many subparameters (about 160) are established to ensure that the goal for each parameter can be met.

Although the Air Force has not included them in the 10 parameters for regular reporting, we identified and reviewed two additional features—situational awareness and low observability—that are integral to the F-22’s ability to operate as intended. The F-22 sensors, advanced aircraft electronics, and cockpit display screens are required to provide the pilot improved situational awareness of potential enemy threats and targets. This increased awareness is to improve pilot response time to the threats, thus increasing the lethality and survivability of the aircraft. The aircraft’s low observable or “stealthy” features allow it to evade detection by enemy aircraft and surface-to-air missiles. We believe the situational awareness and low observability features are critical to the success of the F-22 program and, therefore, we reviewed them and are reporting on the Air Force’s progress in achieving them along with the 10 parameters the Air Force established. The Air Force’s 10 parameters and the 2 additional features we identified and reviewed are described in appendix I.

As of January 1998, the Air Force estimated that, at the end of the EMD program, the F-22’s performance will meet or exceed the goals for all 10 established parameters. Table 3 shows the goal (contract specification) for each parameter; the estimated performance achieved for each parameter based on computer models, analyses, or testing; and the Air Force’s current estimate of the performance each parameter is expected to achieve by the end of EMD. Most of the goals and related performance information are classified and are therefore shown as percentages instead of actual numbers. The table is constructed so that estimated performance greater than the goal is better than the goal, except for airlift support, where using fewer assets is better. Table 3 also shows the two additional features that we included (situational awareness and aircraft low observability) because of their importance to the success of the F-22 program.

We evaluated the basis for the Air Force’s current performance estimates by reviewing and analyzing performance information and estimates for the subparameters that are the components of each parameter. We reviewed selected analyses, test reports, and plans the Air Force used to formulate its estimated performance achieved to date and projected estimates for the end of EMD. As of January 1998, two subparameters, aircraft weight and fuel usage, were not expected to achieve their established goals. However, the Air Force’s analysis indicates that failure to achieve those goals will not cause the associated parameters to fail to meet their established goals. For example, aircraft empty weight is a subparameter that affects the supercruise, acceleration, maneuverability, and combat radius performance parameters.
Although the aircraft’s empty weight is currently expected to be 2 percent higher than the established goal for that subparameter, Air Force analyses indicate that the increased weight is not significant enough to cause the estimates for the affected parameters to fall short of their goals. A more extensive discussion of our analysis and a chart listing the major performance subparameters are included in appendix II.

The Air Force’s estimate to complete F-22 EMD is $18.884 billion, $55 million less than the EMD cost limitation that will be adjusted to $18.939 billion. However, issues have emerged concerning production and delivery of wings and fuselages for the EMD aircraft, and test schedules have consequently been delayed. The Air Force is further assessing the impact of these issues on EMD cost, the schedule upon which test data is produced, and the schedule upon which the EMD program is to be completed. The Air Force is estimating that the F-22 will meet or exceed its performance goals. However, less flight test data have been accumulated through January 1998 than were expected because the flight test program was delayed and flight tests have been suspended to accomplish planned ground tests and minor structural additions to the test aircraft airframe. Delayed tests reduce the amount of actual F-22 performance information that will be available to support Air Force plans to begin production in fiscal year 1999.

In commenting on a draft of this report, DOD generally concurred with it and advised us that the Air Force has notified the Congress about changing the EMD and production cost limitations to recognize direction from the conferees on the fiscal year 1998 Defense Appropriations Act. As a result of this additional information, we have removed a matter for congressional consideration that had been included in the draft report. DOD’s comments are included in appendix III to this report.

We performed our review between July 1997 and February 1998 in accordance with generally accepted government auditing standards. A description of our objectives, scope, and methodology is included in appendix II. We are sending copies of this report to the Secretaries of Defense and the Air Force; the Director, Office of Management and Budget; and other interested parties. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV.

Supercruise means the aircraft can sustain supersonic speed without using its afterburners. Supercruise saves fuel and helps reduce the aircraft’s infrared signature by not using afterburners that produce a high infrared signature. A reduced infrared signature, in turn, helps make the F-22 low observable and harder for enemy aircraft and missiles to detect. The measurement used for supercruise is the highest mach obtainable in stable, level flight at 40,000 feet altitude. The Air Force estimated the F-22 will exceed the supercruise goal by about 14 percent. This estimate was determined by analysis of computer models using the latest data available on aspects such as the engines’ thrust and fuel flow characteristics. Propulsion flight testing is scheduled to begin in the first quarter of 1998 and end in the second quarter of 2000.

Acceleration is a key parameter because the F-22 must be able to outrun enemy aircraft and exit an area after it employs air-to-air or air-to-ground munitions.
The acceleration parameter refers to the amount of time it takes the aircraft to go from 0.8 mach to 1.5 mach at 30,000 feet altitude. The Air Force estimated that the F-22 will exceed the acceleration goal. This estimate was determined by analysis of computer models and ground test data using the latest data available on the major subparameters affecting acceleration. Propulsion flight testing is scheduled to begin in the first quarter of 1998 and end in the second quarter of 2000, and flight performance testing is scheduled to begin in the fourth quarter of 1998 and end in the third quarter of 2001.

The maneuverability parameter is a measurement of the maximum force the aircraft can generate during a turn at 0.9 mach at 30,000 feet altitude without losing speed or altitude. Many additional measures that relate to the maneuverability of an aircraft exist, but the Air Force has determined that this measurement is the most appropriate to demonstrate general F-22 maneuverability at key flight conditions. The Air Force estimated the F-22 will meet its maneuverability goal. The Air Force estimate was determined by analysis of computer models using the latest data available on the major subparameters affecting maneuverability. Flight performance testing is scheduled to begin in the fourth quarter of 1998 and end in the third quarter of 2001.

The airlift support parameter measures the number of C-141B transport aircraft equivalents required to deploy and maintain a squadron of 24 F-22 aircraft for 30 days without resupply. The goal is to be able to provide this support with eight C-141 equivalents, thereby reducing the assets needed to deploy and the cost of deployment. The Air Force estimated it will require fewer than eight C-141 equivalents to transport a squadron of 24 F-22s. This estimate was based on a recent study. A mobility demonstration to verify the estimate, which cannot be done until a full squadron of 24 F-22 aircraft is activated, is scheduled for 2004 upon delivery of the 24th production aircraft. By comparison, a squadron of 24 F-15s requires 19 C-141 equivalents.

Sortie generation rate is defined as the average number of sorties or missions flown per aircraft per day for the first 6 days of a potential conflict. This parameter measures the degree to which the F-22 will be available during the first few days of a potential conflict to achieve and maintain air superiority. The Air Force estimated the F-22 will exceed the sortie generation rate goal. This estimate was based on the results of a 6-day surge analysis done on a computer model using many statistics such as maintenance characteristics, support equipment and resource availability, and aircraft maintenance policy. F-22 maintainability demonstrations are scheduled to be accomplished by 2002 to verify the sortie generation rate estimates.

The radar cross section (RCS) parameter essentially refers to how large the F-22 should appear to enemy radar. The smaller an aircraft’s RCS, the harder it is for enemy radar to detect and track. A small RCS, along with several other factors, contributes to an aircraft’s low observability or “stealthy” nature. This particular parameter is called front sector RCS, which means it is the RCS when the F-22 is viewed from the front by enemy radar. While there are over 200 F-22 RCS measurement points, the Air Force considers the front sector RCS the most important measure of the aircraft’s ability to avoid detection by an enemy.
The Air Force estimated the F-22’s front sector RCS will be smaller, or better, than its goal. Air Force RCS estimates were based on component models that predict the RCS of major components, such as engine inlets and wings, and then use this data to predict the RCS of an entire aircraft. There are 27 major subparameters of this RCS parameter. RCS design validation and specification compliance are also being conducted with a full-scale F-22 mounted on a pole, enabling testers to take RCS measurements. This testing will continue into 1999. In-flight RCS measurements will begin in 1999 and continue into 2002.

Mean time between maintenance is a measure of aircraft reliability defined as the total number of aircraft flight hours divided by the total number of aircraft maintenance actions in the same period. The F-22 goal is 3 flight hours between maintenance actions by the time the F-22 reaches system maturity. The Air Force estimated that by the time the F-22 reaches system maturity (100,000 flight hours, or about year 2008), the F-22 will require maintenance only once every 3.1 flight hours. A reliability computer model was used to develop this estimate by using factors like the design of systems on the aircraft and scheduled maintenance activities. Throughout development and operational flight testing, maintenance data are to be collected from the 500th through the 5,000th hour of flight testing to update the maintenance estimate. Data will continue to be collected about operational usage of the aircraft through system maturity to verify requirements.

The payload parameter is the number of air-to-air missiles, medium and short range, the F-22 is to carry when conducting an air superiority mission and not attacking enemy ground targets. Payload is a key parameter because the F-22 is designed to carry missiles in its internal weapons bay, not externally. Carrying weapons externally increases an aircraft’s radar cross section and can allow easier detection by enemy radar. The Air Force estimated that the F-22 will meet the payload goal of carrying six AIM-120C medium-range missiles and two AIM-9X short-range missiles internally. Weapons bay testing is scheduled for mid-2000 to determine how well the missiles can exit the weapons bay when launched.

The combat radius parameter refers to the nautical miles the F-22 is required to fly to achieve its primary mission of air superiority. This mission requires the F-22 to be able to fly a certain distance subsonically and a certain distance supersonically. The Air Force estimated the F-22 will exceed its combat radius goal by 23 percent. Unfavorable estimates for two of three major subparameters—fuel usage and aircraft weight—are not unfavorable enough to prevent the F-22 from meeting its combat radius goal. Performance flight testing to help compute the aircraft’s combat radius performance, as well as other aerodynamic capabilities, is scheduled to begin in late 1998 and end in the third quarter of 2001.

The radar detection range parameter refers to the number of nautical miles at which the F-22 radar should be able to detect enemy threats or potential targets. The radar needs to be able to detect enemy targets with small radar signatures at sufficient distance to ensure the F-22 can engage the enemy first. The Air Force estimated that the F-22 radar will exceed the established radar goal by 17 percent.
This estimate was based primarily on digital simulations and models used to develop confidence in the tactical functions of radar search and detection capabilities. Radar detection performance is scheduled to be verified against the simulations and models in an aviation electronics laboratory from the first quarter of 1998 to the third quarter of 1999. Actual flight testing of the radar in F-22 EMD aircraft is scheduled to begin in the third quarter of 1999 and continue to at least the second quarter of 2001.

The situational awareness parameter refers to the extent the F-22 sensors and aviation electronics systems are able to make pilots aware of the situation around them. The planned integration of the many aviation electronics systems and sensors is meant to (1) minimize pilot workload of managing and interpreting sensors and (2) provide previously unmatched awareness of potential F-22 threats and targets. Air Force data indicated the F-22 will meet the pilot situational awareness goal based on its performance estimates of the major aviation electronics subparameters affecting situational awareness, including the radar system, the electronic warfare systems, and the communications, navigation, and identification systems. Sixty-three major aviation electronics functions contribute to these three major subparameters. Development of the integrated aviation electronics, however, is in the early stages. For example, the Air Force provided us information on 10 major milestones that must be completed before integrated avionics development will be complete, and the first milestone is not scheduled until October 1998. The last of these milestones is scheduled for November 2001.

The low observability parameter refers to the aircraft’s “stealthy” nature, or its ability to evade detection by enemy radar long enough for it to detect the enemy and shoot first. Five features of an aircraft contribute to its degree of low observability or “stealthiness”: radar cross section, infrared signature, electromagnetic signature, visual signature, and acoustic signature. However, the F-22 does not have a requirement for an acoustic signature. Air Force information indicated it expects the F-22 to meet the performance goals established for the various aspects of low observability. Specification compliance on the most critical feature, radar cross section, is being checked with a full-scale F-22 mounted on a pole and will continue into 1999. In-flight radar cross section measurements will begin in 1999 and continue into 2002. Flight testing to help predict the F-22 infrared signature, another critical aspect of low observability, is scheduled for the third quarter of 1999.

Our objective was to determine whether the F-22 EMD program can be completed within the cost limitation established by the Congress. We also reviewed the extent to which the F-22 EMD program was achieving cost, schedule, and performance goals, including major modifications. To determine whether the program was expected to meet the cost limitation, we obtained the current cost estimate, which served as a basis for the fiscal year 1999 budget request. We compared that estimate to the estimate supporting the cost limitation and discussed the reasons for the differences with F-22 financial management officials. We made several analyses, including comparing the estimated cost at completion for the prime contracts with planned amounts and evaluating cost variances identified in the earned value management system.
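The cost and schedule variances cited in the contractor reports and in our evaluation follow standard earned value conventions: work performed is credited at its budgeted cost and compared both with what was actually spent and with the work that was scheduled. The sketch below illustrates only that arithmetic; it is not the contractors’ reporting system, and the figures are hypothetical, chosen merely to echo the roughly $54 million unfavorable schedule variance noted earlier:

```python
# Standard earned value variance arithmetic (illustrative only; the
# sample figures below are hypothetical, not actual F-22 contract data).

def cost_variance(bcwp: float, acwp: float) -> float:
    """Budgeted cost of work performed minus actual cost of work
    performed; negative means the work cost more than planned."""
    return bcwp - acwp

def schedule_variance(bcwp: float, bcws: float) -> float:
    """Budgeted cost of work performed minus budgeted cost of work
    scheduled; negative means less work was done than scheduled."""
    return bcwp - bcws

bcwp = 946.0   # $M of budgeted work actually performed to date
acwp = 950.0   # $M actually spent performing that work
bcws = 1000.0  # $M of work scheduled to date

print(f"Cost variance: {cost_variance(bcwp, acwp):+.0f}M")          # -4M
print(f"Schedule variance: {schedule_variance(bcwp, bcws):+.0f}M")  # -54M
```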
We obtained and reviewed information on the cost and schedule goals for the F-22 EMD program established by the JET during its review of the F-22 program. Since the JET did not revise F-22 performance requirements, we defined the performance goals as those performance requirements on contract at the time the JET reviewed the program. To assist us in determining the goals, we also reviewed overall program documents such as Selected Acquisition Reports, Monthly Acquisition Reports, Defense Acquisition Executive Summaries, Program Management Reviews, contracts with the prime contractors, Test and Evaluation Master Plans, and Program Management Directives.

To determine whether the program was expected to meet schedule goals, we obtained the current approved program schedule, which incorporated the latest restructured plans for the F-22 EMD program. We discussed the schedule and potential changes to it with F-22 program officials. We also reviewed the planned flight test schedule and the changes to it as a result of the late first flight of the first EMD aircraft. In addition, we discussed technical problems in assembling subsequent EMD aircraft. We evaluated schedule variances in the earned value management system and compared planned milestone accomplishment dates with actual dates of accomplishments. We also assessed the impact the late first flight may have on the overall EMD schedule.

To determine whether the program was expected to meet the F-22 performance goals, we analyzed information on the performance of key performance parameters and of those important subparameters that are measured. We compared the Air Force’s current estimate for these parameters to previous estimates to determine whether estimated performance had changed. We determined whether the current estimates were based on actual tests, engineering models, or engineering judgment. We discussed each of the key performance parameters with program officials and determined the basis for the current estimates. We also reviewed past program documentation to determine the basis for the required performance and discussed the reasons for differences between required performance and estimated performance.

To evaluate the bases for the Air Force’s current performance estimates, we collected information on the goals established for the major performance subparameters that are critical components of the performance parameters. We collected and analyzed information on Air Force estimates, as of January 1998, toward meeting the goals of these subparameters to determine whether the Air Force estimates seemed reasonable. For example, the major subparameters of the airlift support parameter are the number of aircraft support equipment items, the airlift loads necessary to transport aircraft support equipment items, and the maintenance manpower required for a squadron of F-22s. Each of these subparameters has a performance goal just as the overall parameter has a performance goal. The performance parameters and their associated major subparameters are shown in table II.1. [Table II.1, which pairs each performance parameter with its major subparameters (for example, the number of support equipment items and the airlift loads required to deploy that equipment), is not reproduced here.]

To determine the status of contract modifications expected to have a significant effect on F-22 cost or performance, we reviewed the Air Force’s process for receiving, reviewing, approving, and monitoring engineering change proposals.
We obtained a list of the proposals received and determined which had been approved. For those proposals that were approved, we reviewed the related documentation to determine their status and their estimated impact on aircraft performance and on the cost of the EMD program.

To be able to certify whether we had access to sufficient data to make informed judgments on the matters covered in our report, we maintained a log of our requests and the Air Force responses. We numbered and tracked each request we made for documents and for meetings to determine how long it took to receive responses from the Air Force. As a result of this tracking, we were able to certify that we had access to sufficient information to make informed judgments on the cost, schedule, and performance matters covered in this report.

The following is GAO’s comment on the Department of Defense’s letter dated February 12, 1998. 1. Our draft report was submitted to DOD for comment at about the same time as the Air Force notified the Congress that the EMD cost limitation was being increased and the production cost limitation was being decreased to recognize direction from the conferees on the Department of Defense Appropriations Act, 1998. Although direction from the conferees is technically not a change in federal, state, or local law defined as a criterion for changing the cost limitations, we believe the intent of the conferees’ direction is clear and that the types of adjustments the Air Force made to the cost limitations are appropriate.
Pursuant to a legislative requirement, GAO reviewed the Air Force's F-22 engineering and manufacturing development (EMD) program. GAO noted that: (1) the Air Force's estimate to complete F-22 EMD is $18.884 billion, $55 million less than the EMD cost limitation that will be adjusted to $18.939 billion; (2) however, the F-22 EMD program is not meeting schedule goals established in response to the Joint Estimating Team review; (3) the first flight of the F-22 was about 3 months late, issues have emerged concerning production and delivery of wings and fuselages for the EMD aircraft, and test schedules have consequently been delayed; (4) Lockheed Martin has indicated that negotiated costs should not be exceeded because of these issues; (5) the Air Force, however, is further assessing the impact of these issues on EMD cost, the schedule upon which test data is produced, and the schedule upon which the EMD program is to be completed; (6) the Air Force expects to complete this assessment at the end of February 1998; (7) the Air Force is estimating that the F-22 will meet or exceed its performance goals; (8) however, less flight test data have been accumulated through January 1998 than were expected because the beginning of the flight test program was delayed from May 1997 to September 1997 and flight tests have been suspended to accomplish planned ground tests and minor structural additions to the airframe; (9) flight testing will not resume until April 1998; and (10) delayed tests reduce the amount of actual F-22 performance information that will be available to support Air Force plans to begin production in fiscal year 1999.
Before discussing these issues in detail, it is important to recognize that a large amount of federal paperwork is necessary and serves a useful purpose. Information collection is one way that agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the amount of taxes owed. The Bureau of the Census collects information that was used to reapportion congressional representation and is being used for a myriad of other purposes. The events of September 11 have demonstrated the importance of accurate, timely information. On several occasions, we have recommended that agencies collect certain data to improve operations and evaluate their effectiveness. However, under the PRA, federal agencies are required to minimize the paperwork burden they impose.

The original PRA of 1980 established the Office of Information and Regulatory Affairs (OIRA) within OMB to provide central agency leadership and oversight of governmentwide efforts to reduce unnecessary paperwork and improve the management of information resources. Currently, the act requires OIRA to develop and maintain a governmentwide strategic information resources management (IRM) plan, and in recent years OIRA has designated the Chief Information Officers Council’s strategic plan as the principal means of meeting this requirement. In February of this year we issued a report concluding that this document does not constitute an effective and comprehensive strategic vision. Specifically, we said that the goals in the plan were not linked to expected improvements in agency and program performance and did not address such issues as records management or the collection and control of paperwork. Other documents that OIRA provided also did not, either individually or collectively, meet the PRA’s requirement for a governmentwide strategic IRM plan. However, the President’s budget for fiscal year 2003, released in February of this year, contains many (but not all) of the required elements and therefore represents credible progress toward developing a governmentwide IRM plan.

OIRA also has overall responsibility for determining whether agencies’ proposals for collecting information comply with the act. Agencies must receive OIRA approval for each information collection request before it is implemented. Section 3514(a) of the PRA requires OIRA to keep Congress “fully and currently informed” of the major activities under the act and to submit a report to Congress at least annually on those activities. The report must include, among other things, a list of all PRA violations and a list of any increases in burden. To satisfy this reporting requirement, OIRA develops an Information Collection Budget (ICB) by gathering data from executive branch agencies.

In October 2001, the OMB director sent a bulletin to the heads of executive departments and agencies requesting information to be used in preparation for the fiscal year 2002 ICB (reporting on actions during fiscal year 2001). However, that bulletin differed from its predecessors in several respects. For example, the agencies that the director asked to provide information did not include 12 noncabinet-level agencies and organizations that had previously provided ICB information (e.g., the Federal Communications Commission, the Federal Trade Commission, the Social Security Administration, and the Securities and Exchange Commission). The only independent agency asked to provide information to OMB was the Environmental Protection Agency (EPA).
Also, the covered agencies were asked to provide less information than before. For example, the agencies were asked to provide detailed information only for “significant” burden reductions and increases, not on each change in burden estimates. The OMB director said in the October 2001 bulletin that the amount of information requested had been significantly reduced “in the interest of reducing burden on the agencies.”

OIRA published its ICB for fiscal year 2001 (showing changes in agencies’ burden-hour estimates during fiscal year 2000) in August 2001. OIRA officials told us that they did not expect to publish the ICB for fiscal year 2002 until today’s hearing. Therefore, we obtained unpublished data from OIRA to identify changes in governmentwide and agency-specific “burden-hour” estimates during fiscal year 2001. However, because the OIRA data do not include burden-hour estimates from any independent agencies other than EPA, we also obtained data from the Regulatory Information Service Center (RISC) for the independent agencies that OIRA did not cover. We then compared both the OIRA and the RISC data to agencies’ burden-hour estimates in previous ICBs to determine changes in those estimates over time.

“Burden hours” has been the principal unit of measure of paperwork burden for more than 50 years and has been accepted by agencies and the public because it is a clear, easy-to-understand concept. However, it is important to recognize that these estimates have limitations. Estimating the amount of time it will take for an individual to collect and provide information, or how many individuals an information collection will affect, is not a simple matter. Therefore, the degree to which agency burden-hour estimates reflect real burden is unclear. Nevertheless, these are the best indicators of paperwork burden available, and we believe they can be useful as long as their limitations are kept in mind.

Federal agencies estimated that their information collections imposed about 7 billion burden hours on the public at the end of fiscal year 1995—just before the PRA of 1995 took effect. The PRA made several changes in federal paperwork reduction requirements. One such change required OIRA to set a goal of at least a 10-percent reduction in the governmentwide burden-hour estimate for each of fiscal years 1996 and 1997, a 5-percent governmentwide burden reduction goal in each of the next 4 fiscal years, and annual agency goals that reduce burden to the “maximum practicable opportunity.” Therefore, if federal agencies had been able to meet each of these goals, the 7-billion burden-hour estimate in 1995 would have fallen to about 4.6 billion hours by September 30, 2001.

However, as figure 1 shows, this anticipated reduction in paperwork burden did not occur. In fact, the data we obtained from OIRA show that the governmentwide burden-hour estimate increased by about 9 percent during this period and stood at more than 7.6 billion hours as of September 30, 2001. During fiscal year 2001 alone, the governmentwide estimate increased by nearly 290 million hours—the largest 1-year increase since the PRA was amended and recodified in 1995. It is also important to understand how the most recent estimate of federal paperwork is allocated by the purpose of the collections, by type of respondent, and by agency.
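Before turning to that allocation, note that the roughly 4.6 billion-hour figure above follows directly from compounding the statutory reduction goals against the roughly 7 billion-hour fiscal year 1995 baseline. The short sketch below is merely a check of that arithmetic, using only the goals stated in the text:

```python
# Compound the PRA reduction goals against the fiscal year 1995 baseline:
# 10 percent in each of fiscal years 1996 and 1997, then 5 percent in
# each of the next 4 fiscal years (through fiscal year 2001).
estimate = 7.0  # billions of burden hours at the end of fiscal year 1995
for goal in (0.10, 0.10, 0.05, 0.05, 0.05, 0.05):
    estimate *= 1 - goal
print(f"Implied estimate for September 30, 2001: {estimate:.2f} billion hours")
# -> about 4.62 billion, i.e., "about 4.6 billion hours"
```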
As figure 2 shows, RISC data indicate that almost 95 percent of the more than 7.6 billion hours of estimated paperwork burden in place governmentwide as of September 30, 2001, was being collected primarily for the purpose of regulatory compliance. Less than 5 percent was being collected as part of applications for benefits, and about 1 percent was collected for other purposes. Figure 3 shows that almost two-thirds of the governmentwide burden estimate was primarily directed toward businesses. Slightly less than one-third of the burden was primarily on individuals, and less than 3 percent was on state, local, or tribal governments.

As figure 4 shows, as of September 30, 2001, IRS accounted for about 83 percent of the governmentwide burden-hour estimate (up from about 75 percent in September 1995). Other agencies with burden-hour estimates of 100 million hours or more as of that date were the departments of Labor (DOL) and Health and Human Services (HHS), EPA, and the Securities and Exchange Commission (SEC). Because IRS constitutes such a significant portion of the governmentwide burden-hour estimate, changes in IRS’ estimate can have a significant—and even determinative—effect on the governmentwide estimate.

As table 1 shows, some agencies’ paperwork burden estimates decreased sharply during fiscal year 2001, most notably those of the departments of Commerce and Transportation (DOT). However, other agencies (e.g., the Department of the Treasury and the SEC) indicated that their paperwork burdens had increased. The reasons behind some of these changes are clear. For example, the sharp decrease in the Department of Commerce’s estimate (from more than 38 million hours to about 10 million hours) appears to be almost entirely attributable to the completion of the decennial census. The reasons for other changes are less immediately apparent. As I will discuss later, the sharp decrease in the DOT estimate was caused by the expiration (and subsequent PRA violation) of a single information collection.

However, changes in agencies’ bottom-line burden-hour estimates do not tell the whole story and can be misleading. It is also important to understand how the agencies accomplished these results. OIRA classifies modifications in agencies’ burden-hour estimates as either “program changes” or “adjustments.” Program changes are the result of deliberate federal government action (e.g., the addition or deletion of questions on a form) and can occur as a result of new statutory requirements, agency-initiated actions, or the expiration or reinstatement of OIRA-approved collections. Adjustments are not the result of deliberate federal government action but rather are caused by factors such as changes in the population responding to a requirement or agency reestimates of the burden associated with a collection of information. For example, if the economy declines and more people complete applications for food stamps, the resultant increase in the Department of Agriculture’s (USDA) paperwork estimate is considered an adjustment because it is not the result of deliberate federal action.

In recent ICBs, OIRA has indicated whether fluctuations in agencies’ burden-hour estimates were caused by program changes or adjustments. The fiscal year 2001 burden estimates that we obtained from OIRA and RISC in preparation for this hearing also contained those two categories and are presented in table 1.
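Before turning to what those data show, the program change/adjustment distinction just described can be made concrete with a small sketch. The category rule follows the definitions above; the data structure and the sample entries are hypothetical illustrations, not OIRA’s actual system:

```python
# Illustrative tally of burden-hour modifications using OIRA's two
# categories: "program change" (deliberate federal action) and
# "adjustment" (external factors). Sample entries are hypothetical.
from dataclasses import dataclass

@dataclass
class BurdenModification:
    description: str
    hours: float              # positive = increase, negative = decrease
    deliberate_action: bool   # new statute, agency initiative, or
                              # expiration/reinstatement of a collection

    @property
    def category(self) -> str:
        return "program change" if self.deliberate_action else "adjustment"

modifications = [
    BurdenModification("Questions added to a form", 2_500_000, True),
    BurdenModification("More benefit applications filed", 1_200_000, False),
]
for m in modifications:
    print(f"{m.category:>14}: {m.hours:+,.0f} hours ({m.description})")
```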
Analysis of those data helps explain what drove the changes in agencies’ bottom-line burden-hour estimates. For example, almost all of the marked decline in the Department of State’s estimate was due to adjustments. Also, the more than 40 million burden-hour increase in the SEC estimate was primarily driven by adjustments. Therefore, the Department of State cannot claim credit for having proactively reduced the paperwork burden that it imposes on the public, and the SEC may not be responsible for the increase that it reported.

In contrast, table 1 shows that the more than 37 million burden-hour decrease in DOT’s bottom-line paperwork estimate was entirely driven by a more than 40 million-hour program change reduction. However, the table does not indicate what specific type of action precipitated this or any other program change—new statutes, agency actions, or reinstated/expired collections. Although DOT’s ICB submission did not provide further clarification, OIRA staff told us that the reduction was caused by the expiration (and subsequent PRA violation) of the agency’s “hours of service” information collection.

Last year, the data that OIRA obtained from the agencies allowed us to separate the program changes in our table into the new statutes, agency actions, and reinstated/expired subcategories. However, as I noted previously, OIRA did not request such detailed data from the agencies for the fiscal year 2002 ICB except for certain “significant” collections. Therefore, our table this year does not break down the program changes into these subcategories.

For the past 2 years, OIRA indicated in separate columns of the ICB summary table whether the program changes made during each fiscal year were due to agency action or new statutes. OIRA officials told us that the ICB that the agency was releasing today would present both statutory and agency action-based program changes during fiscal year 2001 in one column. As a result, they said, Congress and the public could calculate the amount of program change that was attributable to violations or reinstatements by subtracting the amount of the new statutes/agency actions from the total program changes.

We believe that this approach has at least two problems. First, combining the statutory and agency-initiated program changes into one column prevents Congress and the public from knowing why the agencies’ paperwork estimates changed. Presentation of this information in separate columns—as has been done in previous years—allows the public to know whether Congress or the agencies are responsible for increases or decreases in an agency’s paperwork estimate. Second, not providing this information and requiring Congress and the public to calculate the amount of change in burden caused by violations or the reinstatement of violations seems to run counter to the administrator’s stated goal of increasing the transparency of OIRA’s operations. OIRA has such data: expirations and reinstatements were listed separately in the raw data provided to us in preparation for this hearing, and OIRA used the data to calculate the amount of the program changes that were due to agency actions or new statutes. As I mentioned previously, the PRA requires OIRA to keep Congress and congressional committees “fully and currently informed” of the major activities under the act.
It specifically says that OIRA’s annual report must identify “any increase in the collection of information burden.” We do not believe that the information OIRA is releasing today fully satisfies this PRA requirement in that it includes only some of the agencies with estimated burden-hour increases and substantial information collection requirements. In fact, some of the independent agencies that OIRA indicated that it planned to exclude from this year’s ICB (e.g., the SEC, the Federal Trade Commission, and the Federal Communications Commission) had higher estimated burden than some of the cabinet departments for which information was provided (e.g., the departments of Energy, Interior, and Veterans Affairs). Also, to facilitate transparency and increase Congress’ and the public’s understanding of paperwork burden, we believe that OIRA should separately identify each of the specific types of program changes in the ICB—changes due to agency action, changes due to new statutes, changes due to violations, and changes due to reinstatements.

Although changes in non-IRS departments and agencies’ burden-hour estimates are notable and important, they pale in comparison to the size of the changes at IRS. The increase in the IRS burden-hour estimate during fiscal year 2001 (about 250 million burden hours) was more than six times as much as the rest of the government combined. Therefore, although all agencies must ensure that their information collections impose the least amount of burden possible, it is clear that the key to controlling federal paperwork governmentwide lies in understanding and controlling the increases at IRS.

As table 1 shows, more than 80 percent of the 259 million burden-hour increase in the Department of the Treasury paperwork estimate during fiscal year 2001 was attributed to program changes. IRS accounted for about 250 million (about 97 percent) of the departmental increase. In the Department of the Treasury’s ICB submission to OMB describing changes during fiscal year 2001, IRS identified a number of significant program change increases that it said were a function of the underlying statutes. For example, IRS said that it added nearly 28 million burden hours to its estimate because the FSC Repeal and Extraterritorial Income Exclusion Act of 2000 added a section to the Internal Revenue Code, resulting in a new Form 8873.

However, about two-thirds of the program change increases that IRS identified in the ICB submission for fiscal year 2001 involved changes made at the initiation of the agency, not because of new statutes. For example:
- IRS said that 14 lines and 23 Code sections were added to Form 1065 (“U.S. Return of Partnership Income”) and accompanying schedules and instructions at the request of the agency, resulting in an estimated increase of more than 75 million burden hours.
- IRS said that changes made at the agency’s request to Form 1120S (“U.S. Income Tax Return for an S Corporation”) and accompanying schedules and instructions resulted in an estimated increase of more than 22 million burden hours.
- IRS said that changes at the request of the agency to Form 1120 (“U.S. Corporation Income Tax Return”) and related schedules and instructions resulted in a more than 7 million-hour increase in the form’s estimated burden.
Because IRS attributed most of the increase in its burden-hour estimate during fiscal year 2001 to program changes, and because most of the program changes during that period were made at the agency’s initiative, IRS cannot claim (as it has in the past) that statutory changes primarily caused the increase in its burden-hour estimates.

IRS also indicated in the ICB submission that it had taken a number of actions intended to reduce paperwork burden. For example, IRS said it (1) had conducted a series of focus groups consisting of taxpayers who file Schedule D to explore their preferences for presenting and reporting information to compute gains and losses and any tax due, (2) was working with a contractor to redesign Form 941, “Employer’s Quarterly Federal Tax Return,” and the accompanying instructions, and (3) was continuing its initiative to encourage taxpayers to file the simplest tax return for their tax situation. With regard to small corporations, IRS said it had proposed that corporate filers with assets of less than $250,000 be exempted from certain reporting requirements, which would—if implemented—save 39 million burden hours.

In summary, the agencies’ information collection estimates for the ICB being released today indicate that federal paperwork continues to increase and that changes initiated by IRS accounted for most of the record 1-year increase during fiscal year 2001. As we indicated in our February report on information resources management, OIRA and the agencies lack a unifying vision for how those resources will facilitate the government’s agenda. Without that vision, the risk is increased that duplicative initiatives will be undertaken and that opportunities for data sharing will be missed. The PRA requires that OIRA develop such a plan, one element of which must be a proposal for reducing information burdens. Also, because IRS constitutes such a significant portion of the governmentwide burden-hour estimate, another strategy to address increases in federal paperwork could be to focus OIRA’s burden-reduction efforts on that agency. Just as increases in IRS’s burden estimates have had a determinative effect on the governmentwide estimates, reduction in the IRS estimates can have an equally determinative effect.

I would now like to turn to the other main topic you asked us to address—PRA violations. The PRA prohibits an agency from conducting or sponsoring a collection of information unless (1) the agency has submitted the proposed collection and other documents to OIRA, (2) OIRA has approved the proposed collection, and (3) the agency displays an OMB control number on the collection. The act also requires agencies to establish a process to ensure that each information collection is in compliance with these clearance requirements. OIRA is required to submit an annual report to Congress that includes a list of all violations. The PRA says no one can be penalized for failing to comply with a collection of information subject to the act if the collection does not display a valid OMB control number. OIRA may not approve a collection of information for more than 3 years, and there are about 7,000 approved collections at any one time. In the ICB for fiscal year 1999, OIRA identified a total of 872 violations of the PRA during fiscal year 1998.
In our testimony before this Committee 3 years ago, we noted that some agencies—USDA, HHS, and the Department of Veterans Affairs (VA)—had each identified more than 100 violations. We also said that OIRA had taken little action to address those violations, and we suggested a number of ways that OIRA could improve its performance. For example, we said that OIRA could use its database to identify information collections for which authorizations had expired, contact the collecting agency, and determine whether the agency was continuing to collect the information. We also said that OIRA could publicly announce that an agency is out of compliance with the PRA in meetings of the Chief Information Officers Council and the President’s Management Council.

During the past 2 years, the number of violations that OMB reported declined steadily. Two years ago we testified that the number of violations had declined from 872 during fiscal year 1998 to 710 during fiscal year 1999. Last year, we testified that the number had declined even further—from 710 to 487 during fiscal year 2000. Each year, a few agencies—most consistently USDA and HUD, but occasionally the Department of Justice and VA—have accounted for a disproportionate share of the violations. Each year we concluded that, although OIRA had taken several actions to address PRA violations, OMB and the agencies responsible for the collections could do more to ensure compliance.

Table 2 shows the number of violations that the covered agencies reported (and that OIRA agreed were violations) during fiscal year 2001. As noted previously, noncabinet-level agencies other than EPA were not required to report this information to OIRA in preparation for this year’s ICB, so we could not provide information for those agencies in our table. Therefore, comparisons of the total number of violations during fiscal year 2001 with previous years can be made only for those agencies that reported in all relevant time frames. The cabinet departments and EPA reported 648 PRA violations during fiscal year 1999 and 423 violations during fiscal year 2000. Those agencies identified a total of 402 violations during fiscal year 2001—only slightly fewer than the year before. Therefore, the substantial decline in the number of PRA violations in these agencies appears to have stopped.

As was the case in previous years, HUD, USDA, and VA reported the most violations during fiscal year 2001—112, 67, and 64, respectively. The number of violations at USDA decreased between fiscal years 2000 and 2001 (from 96 to 67), but the numbers at HUD and VA went up (from 99 to 112, and from 40 to 64, respectively). Overall, the number of violations decreased in 8 of the 15 agencies reporting data in both years, increased in 6 agencies, and stayed the same in 1 agency. Many of the 402 violations that occurred during fiscal year 2001 were new and had been resolved by the end of the fiscal year. However, about 40 percent of the violations were listed in last year’s ICB, and many had been occurring for years.
For example, as of the end of fiscal year 2001:

USDA indicated that 13 of its collections had been in violation for more than 2 years, and 10 had been in violation for at least 3 years;

HUD indicated that 10 of its collections had been in violation for at least 2 years, and 6 had been in violation for at least 4 years;

the Department of the Interior indicated that 9 of its collections had been in violation for at least 2 years, and 4 had been in violation for at least 7 years; and

VA indicated that 25 of its collections had been in violation for at least 2 years, and 15 had been in violation for at least 4 years.

In our testimony in previous years, we provided an estimate of the monetary cost associated with certain PRA violations. To estimate that cost, we multiplied the number of burden hours associated with the violations by an OMB estimate of the “opportunity costs” associated with each hour of IRS paperwork. Although the ICBs list the information collections that were in violation during the previous year, along with the dates of expiration and any reinstatement, they do not provide the number of burden hours associated with each violation. Therefore, we obtained data from OIRA on the estimated number of burden hours for 340 of the 402 information collections that were in violation of the PRA during fiscal year 2001.

As in previous years, the data suggest that these PRA violations may impose significant opportunity costs on those required to provide the related information. We estimate that the 340 violations involved about 58 million burden hours of paperwork, or about $1.6 billion in opportunity costs (we illustrate this computation at the end of this statement). A small percentage of the collections accounted for the bulk of those costs. For example, 60 of the collections involved estimated opportunity costs of at least $1 million each, for a total of more than $1.5 billion. Just three of the collections (two from USDA and one from VA) accounted for more than $1 billion in estimated opportunity costs.

Many of the information collections that were in violation of the PRA were being administered for regulatory purposes, so if the respondents had known the collections were not valid, they might not have completed the required forms. Other violations, however, involved collections in which individuals or businesses were applying for benefits such as loans or subsidies. It is therefore not clear whether these individuals and businesses would have refused to complete the required forms had they known that the collections were being conducted in violation of the PRA.

As I indicated earlier, OIRA has taken some steps to encourage agencies to comply with the PRA, and those steps previously appeared to be paying off in fewer reported violations, both overall and within particular agencies. However, particularly because the number of violations did not decline during fiscal year 2001, we believe that OIRA can do more. For example, 2 years ago OIRA added information to its Internet home page about information collections that expired in the previous month. As a result, potential respondents are able to review the list of recent expirations and inform the collecting agency, OIRA, and Congress of the need for the agency either to obtain reinstatement of OIRA approval or to discontinue the collection. Although notifying the public about unauthorized information collections is a step in the right direction, OIRA’s approach places the burden of responsibility for detecting unauthorized collections on the public.
It is OIRA, not the public, that has the statutory responsibility to review and approve agencies’ collections of information and to identify all PRA violations. Therefore, we believe that OIRA should not simply rely on the public to identify these violations. For example, OIRA desk officers could use OIRA’s database to identify information collections for which authorizations had expired, contact the collecting agency, and determine whether the agency is continuing to collect the information. The desk officers could also use the database to identify information collection authorizations that are about to expire, and thereby perhaps prevent violations of the act. At a minimum, OIRA could post on its Internet home page the complete list of collections that it believes are in violation of the PRA—not just those collections that expired during the previous month and that may or may not constitute violations.

OIRA officials and staff previously told us that they have no authority to do much more than publish the list of violations in the ICB and inform the agencies directly that they are out of compliance with the act. We do not agree that OIRA is as powerless as this explanation would suggest. First, OIRA could publish in the ICB the number of violations for all of the agencies covered by the PRA. Section 3514(a) of the act specifically requires OIRA to include in its annual report a “list of all violations,” not just violations at the cabinet departments and EPA. Therefore, we do not believe that the information OIRA is releasing today fully satisfies this requirement. Also, if an agency does not respond to an OIRA notice that one of its information collections is out of compliance with the PRA, the OIRA Administrator could take any number of actions to encourage compliance, including any or all of the following:

Publicly announce that the agency is out of compliance with the PRA in meetings of the Chief Information Officers Council.

Notify the “budget” side of OMB that the agency is collecting information in violation of the PRA and encourage the appropriate resource management office to use its influence to bring the agency into compliance.
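As a rough illustration of the opportunity-cost arithmetic referred to earlier in this statement, the following minimal sketch reproduces the computation in Python. The per-hour rate it shows is simply what the rounded totals reported above imply (about $1.6 billion across roughly 58 million burden hours); it is an assumption for illustration, not OMB’s published figure.

```python
# Illustrative reconstruction of the opportunity-cost estimate discussed
# above: multiply burden hours by a per-hour opportunity cost. Here the
# rate is backed out of the rounded totals reported in this statement,
# so it is an assumption for illustration, not OMB's published figure.

burden_hours = 58_000_000       # hours tied to the 340 violations reviewed
reported_total = 1_600_000_000  # reported opportunity-cost estimate, in dollars

implied_rate = reported_total / burden_hours
print(f"Implied opportunity cost per burden hour: ${implied_rate:.2f}")

# Applying the implied rate back to the burden hours recovers the estimate.
print(f"Estimated opportunity cost: ${burden_hours * implied_rate:,.0f}")
```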
This testimony discusses the implementation of the Paperwork Reduction Act of 1995 and agency information collections whose Office of Management and Budget (OMB) authorizations had expired or that were otherwise inconsistent with the act. GAO found that federal paperwork rose by 290 million burden hours during fiscal year 2001--the largest one-year increase since the act was amended and recodified in 1995. This occurred largely because the Internal Revenue Service (IRS) increased its paperwork estimate by about 250 million burden hours during the year. Most of the paperwork increase at IRS resulted from changes made by the agency--not from new statutes. Federal agencies providing information to OMB identified more than 400 violations of the act during fiscal year 2001. Some of these violations have persisted for years, and they collectively represent substantial opportunity costs.
FSA, established by the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994), provides, among other things, direct government-funded loans to farmers and ranchers who are unable to obtain financing elsewhere at reasonable rates and terms. For example, direct farm ownership loans are made for buying farm real estate and making capital improvements. Direct farm operating loans are made for purposes such as buying feed, seed, fertilizer, livestock, and farm equipment and paying family living expenses. Additionally, emergency disaster loans are made to farmers and ranchers whose operations have been substantially damaged by adverse weather or other natural disasters.

When a borrower does not repay his or her loans, FSA has various tools to resolve the delinquency, such as (1) restructuring the loans, which may include reducing debt; (2) allowing a borrower who does not qualify for restructuring to make a payment for less than the amount owed, which results in FSA’s forgiving the balance; and (3) reaching a final resolution of the debt that may or may not include a payment by the borrower, which also results in debt forgiveness. The Consolidated Farm and Rural Development Act, as amended (P.L. 87-128, Aug. 8, 1961), which is referred to as the Con Act, is the basic authority for the farm loan programs.

Table 1 shows that as of September 30, 1997, about 110,000 borrowers owed FSA about $9.7 billion on active direct farm loans; these figures represent 9.5 percent fewer borrowers and 14.8 percent less debt than in September 1995. About 18,600 borrowers were delinquent at the end of September 1997; these borrowers owed about $2.7 billion, or 28.2 percent of the total outstanding principal. This delinquency rate is an improvement over the rates in September 1996 and 1995, which were 34.2 percent and 40.7 percent, respectively. Table 1 also shows that borrowers with larger amounts of debt had higher delinquency rates than borrowers owing smaller amounts. In 1997, for example, about 72 percent of the principal held by borrowers with loans totaling $500,000 or more was held by delinquent borrowers, whereas about 25 percent of the principal held by borrowers with loans totaling less than $500,000 was held by delinquent borrowers.

Borrowers in a small number of states accounted for a disproportionate share of the total delinquent debt in FSA’s farm loan portfolio. For example, borrowers in five states owed slightly over 40 percent of the total delinquent debt at the end of fiscal year 1997, compared with about 36 percent and 34 percent at the end of fiscal years 1996 and 1995, respectively. Appendix I provides information on the five states with the most delinquent debt in each of these 3 fiscal years.

FSA incurs losses when it writes off the direct farm loans of delinquent borrowers through its debt settlement process, which essentially represents the agency’s final resolution of unpaid loans and generally occurs after loan-security property has been liquidated. In fiscal year 1997, FSA wrote off about $380 million through debt settlements, down from more than $860 million in fiscal year 1996 and $780 million in fiscal year 1995. In total, about 7,600 borrowers had slightly more than $2 billion written off during this 3-year period. FSA has the following four options for resolving debts in its debt settlement process:
Adjustment. A borrower agrees to make, at some time in the future, one payment, or a series of payments, that is less than the amount owed.

Compromise. The debt is satisfied when a borrower makes an immediate single lump-sum payment that is less than the amount owed.

Cancellation. The debt is written off without any payment made, and the borrower is released from further liability because FSA believes that the borrower has insufficient potential to make additional payments.

Charge-off. The debt is written off without any payment made, and FSA ends collection activity, but the borrower is not released from liability for the amount owed.

Table 2 summarizes the amount of debt written off during the fiscal year 1995-97 period through each of the four debt settlement options. As the table shows, a comparatively small number of borrowers with large amounts of debt accounted for a substantial portion of the write-offs. In 1997, for example, 182 borrowers, or about 9 percent of those whose debts were written off, had $500,000 or more forgiven; these borrowers received about 48 percent of the total debt forgiveness. Borrowers in a small number of states also accounted for a disproportionate share of the write-offs. Specifically, borrowers in the five states with the most write-offs each year received about 48 percent of the total write-offs during this 3-year period. Appendix II provides information on the five states with the most write-offs during each of these 3 fiscal years.

Three statutory provisions enacted in the mid-1990s provide FSA with discretionary authority to contract for loan-servicing assistance. These provisions authorize, but do not require, contracting with (1) private attorneys to obtain legal assistance in resolving delinquent farm loan accounts, (2) private lenders to obtain assistance in servicing farm loan borrowers’ accounts, and (3) private collection agencies to obtain assistance in collecting delinquent farm loans. FSA has made little use of these authorities: It has contracted with private attorneys in only two states, and it has not contracted with private lenders or private collection agencies and is not actively considering doing so.

The Farmers Home Administration Improvement Act of 1994 (P.L. 103-248, May 11, 1994) gave FSA discretionary authority to contract with private attorneys to assist in resolving delinquent farm loan accounts. The process for using the new contracting authority generally starts with a request for legal assistance from an FSA state office to FSA’s Farm Credit Programs at headquarters. Farm Credit Programs then refers the request to USDA’s Office of General Counsel (OGC), which after its review may refer the matter to the Department of Justice. Justice, in consultation with FSA and OGC, reviews the situation to see if (1) the farm loan cases can be handled by the U.S. Attorney’s office that has jurisdiction for the area, (2) the cases can be referred to a private attorney under contract with Justice, or (3) FSA should use its authority to contract with a private attorney. FSA is required to obtain the approval of both OGC and Justice before it can enter into a contract with a private attorney. Once OGC and Justice have approved FSA’s request, FSA’s state office solicits bids and subsequently enters into contracts, with OGC providing legal advice to the contracting office.
To date, FSA has contracted with private attorneys for assistance in resolving delinquent farm loan accounts in only two states—Louisiana and New Jersey—and has no plans to expand such contracting to other states. Specifically, following the process described above, FSA’s Louisiana state office entered into contracts with nine law firms to handle foreclosure cases in October 1995. As of February 1998, two of the nine firms were no longer under contract; according to FSA state officials, the agency canceled one contract at the law firm’s request and the other because of noncompliance with the terms of the contract. Through February 10, 1998, FSA had referred a total of 156 foreclosure cases to the law firms. Sixty-four of these cases had been settled, with collections totaling $2.2 million; borrowers in another nine cases had filed for bankruptcy; and the remaining cases were ongoing. The cost of legal services was about $58,000, excluding the attorneys’ reimbursable expenses, such as court filing fees.

Concerning New Jersey, USDA officials told us that FSA’s New Jersey state office contracted with a law firm after approval by the U.S. Attorney’s office with jurisdiction there. No farm loan cases in New Jersey had been referred to the private law firm, and the USDA officials said they anticipate that none will be.

Little consideration has been given within USDA to additional contracting by FSA because the agency’s needs for legal assistance are being satisfied by Justice. In two states, Justice has been referring problem farm loan cases to private attorneys that it has under contract. Specifically, according to a Justice official, the Department has referred such cases in the middle district of Florida since March 1993 and in the southern district since January 1994—both before FSA received its contracting authority. The Justice official also told us that farm loan foreclosure cases in the northern district of New York are being handled by private attorneys under contract with Justice. FSA and OGC officials said that FSA has additional legal needs in New York, which will probably also be met by referring cases to private attorneys under contract with Justice. An OGC official also said that USDA discussed FSA’s legal needs in Idaho with Justice and that, as a result, the U.S. Attorney’s office has increased legal action on farm loan cases in that state.

Finally, state law can have a considerable bearing on FSA’s legal needs in problem farm loan cases. Specifically, when state law provides for nonjudicial foreclosure—that is, when judicial process is not needed for a lender to foreclose on the loan-security property of a delinquent borrower—FSA has less need for legal assistance than when state law requires judicial process in foreclosure cases. According to USDA officials, about half of all states allow nonjudicial foreclosure.

The Federal Agriculture Improvement and Reform (FAIR) Act of 1996 (P.L. 104-127, Apr. 4, 1996) gave FSA discretionary authority to contract with private lenders to assist in servicing outstanding loans, including contracting for one or more pilot projects to test the concept. To date, FSA has not used this authority and is not actively considering its use. Agency officials told us that the new contracting authority is not needed for three reasons. First, they stated that USDA’s recent reorganization of various farm program functions and offices has resulted in an increased number of field staff available to service farm loan borrowers.
While this may be true, we note that FSA continues to have problems in servicing farm loans. Specifically, our review of FSA’s internal control reviews for fiscal year 1997 found that the agency’s field officials do not always follow FSA’s own loan-servicing standards. For example, as table 3 shows, FSA’s rates of noncompliance on four key loan-servicing standards ranged from about 17 to 29 percent.

The problems identified in FSA’s fiscal year 1997 internal control reviews were not unique. For example, FSA’s internal control reviews in fiscal years 1995 and 1996 showed a 20.6-percent rate of noncompliance with the requirement that field staff analyze borrowers’ operations and assist in planning; a 20.3-percent rate of noncompliance with the requirement that annual chattel inspections be performed to ensure that security property is being maintained; and a 16.8-percent rate of noncompliance with the requirement that office and field visits with borrowers be documented to reflect adequate supervision.

FSA officials acknowledged that the agency has a problem with loan servicing. They said that the agency’s field staffs, which now include people who had previously worked on USDA’s farm payment programs but not on the farm loan programs, need to be specifically assigned to work on farm loans and trained in credit matters. They anticipate that these actions will be taken in the future.

The second reason cited by FSA’s farm loan officials for not contracting with private lenders is that they use other discretionary authority to contract with entities besides lenders for some servicing of farm loans, such as with local management consulting firms and accounting firms to review borrowers’ operations and financial reports and with local appraisal companies to appraise property used as security for loans. We confirmed that FSA has employed such contractors for some loan-servicing activities.

Finally, FSA’s farm loan officials said they have not contracted with private lenders because such contracting would increase the cost of operating the farm loan programs. FSA has not, however, estimated either the cost or the potential benefits of contracting for loan-servicing assistance. The statutory provision authorizing contracting with private lenders required USDA to report to the Congress by September 30, 1997, on its experience in using such contracts. USDA did not file this report because it had not contracted with any private lenders for loan-servicing assistance.

The FAIR Act also gave FSA discretionary authority to contract with private collection agencies to assist in collecting on unpaid accounts. However, as of March 1998, FSA had not contracted with private collection agencies for assistance, and FSA officials are not actively considering such contracting. The officials said they are not using this authority because, before FSA can contract with private collection agencies, it must complete the Con Act’s servicing requirements, discussed below, that apply to borrowers with delinquent loans. Because this process often takes more than 180 days, FSA is required to transfer delinquent accounts to the Department of the Treasury for collection action, which precludes FSA from contracting for the services itself. This transfer is required by the Debt Collection Improvement Act of 1996 (DCIA)—section 31001 of P.L. 104-134, Apr. 26, 1996—which was enacted shortly after FSA was given its authority to contract with private collection agencies.
When a borrower misses a loan payment, the Con Act’s requirements and FSA’s implementing regulations provide the following process for servicing the delinquent loan. When a borrower is 90 days past due on a scheduled payment, FSA is to formally notify the borrower of its available loan-servicing options, such as the possibility of restructuring the outstanding loans. Generally, the delinquent borrower has 60 days to apply for servicing. If the borrower applies for restructuring, FSA has 90 days to process the application and to notify the borrower whether he or she qualifies for restructuring. After notification, FSA has 45 days to offer to restructure the borrower’s debts. If the borrower does not qualify for restructuring, the borrower has 90 days to make a buyout payment to FSA that is based on the value of the property used as security for the loan. When the borrower does not apply for servicing, or when the borrower applies but restructuring or a buyout does not occur, FSA will demand full repayment of the debt. If payment is not made, FSA starts the liquidation phase of its servicing, which may include foreclosure action, and then debt settlement.

FSA’s farm loan officials told us that the agency needs to complete its loan servicing for borrowers, including debt settlement, before it can consider private collection services. Furthermore, in debt settlement, only delinquent borrowers whose debts are charged off by FSA remain liable for the unpaid amounts of their loans and would be subject to debt collection action. However, because of the time required for servicing delinquent accounts, most of these borrowers have been delinquent for at least 180 days (the illustrative tally below shows how the servicing windows alone can exceed this threshold). As a result, under the DCIA, FSA is required to transfer the accounts to Treasury for collection action. Once the accounts are transferred, Treasury may refer these borrowers to one of the private collection agencies that it has contracted with for collection action. USDA officials also said that Treasury officials have told them to forward any accounts that are less than 180 days delinquent, after the loan security has been liquidated, for referral to one of Treasury’s collection contractors.

At the time of our review, none of FSA’s farm loan accounts had been or were ready to be transferred to Treasury, nor was Treasury ready to receive delinquent accounts. FSA officials told us that the agency has to complete two tasks before any accounts can be transferred. First, FSA has to modify the format of its automated farm loan record system because the current format does not meet Treasury’s requirements. Second, each borrower whose debt was charged off has to be reviewed by FSA’s field offices to ensure that the account qualifies to be transferred—for example, that the borrower is not in litigation, foreclosure, or bankruptcy and that the statute of limitations on pursuing collections has not expired. FSA officials estimated that both the modification of the automated system and the review of the borrowers would be completed by September 1998.
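To illustrate why the Con Act servicing process described above routinely pushes accounts past the DCIA’s 180-day threshold, the following minimal sketch tallies the statutory windows discussed in this report. Treating the windows as strictly sequential is a simplifying assumption for illustration, not FSA’s official computation.

```python
# Simplified tally of the Con Act loan-servicing windows described above.
# The day counts come from this report; treating them as strictly
# sequential is a simplifying assumption, not FSA's official computation.

servicing_windows = [
    ("Delinquency before FSA's formal notice", 90),  # borrower 90 days past due
    ("Borrower applies for loan servicing", 60),
    ("FSA processes the application", 90),
    ("FSA offers to restructure the debt", 45),
]

elapsed = 0
for step, days in servicing_windows:
    elapsed += days
    print(f"{step:<40} +{days:>3} days (cumulative: {elapsed})")

# Even before any 90-day buyout window, liquidation, or debt settlement,
# the account has been delinquent well past 180 days, which triggers the
# DCIA requirement to transfer it to Treasury for collection.
print(f"Days delinquent after these steps alone: {elapsed}")
```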
We provided a draft of this report to USDA and met with Department officials to obtain their comments. These officials included the Farm Service Agency’s Deputy Administrator for Farm Credit Programs, the Director of the Loan Servicing and Property Management Division, and the Office of General Counsel’s Associate General Counsel for Rural Development. The USDA officials generally agreed with the material contained in the report and offered technical corrections and suggestions for clarifying the report, which we incorporated as appropriate.

We performed our work from January through April 1998 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix III.

As agreed, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this letter. At that time, we will send copies of this report to the appropriate Senate and House committees; interested Members of Congress; the Secretary of Agriculture; the Administrator of FSA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix IV.

Appendix I provides information on the Farm Service Agency’s (FSA) active direct farm loans in the five states with the highest amounts of delinquent debt. Tables I.1, I.2, and I.3 show the total amount of outstanding principal and the portion owed by delinquent borrowers at the end of fiscal years 1997, 1996, and 1995, respectively. The tables also provide this information for borrowers in two ranges of outstanding principal—those owing less than $500,000 and those owing $500,000 or more.

Appendix II provides information on FSA’s write-offs of direct farm loans in settling delinquent borrowers’ debts through the debt settlement process in the five states with the highest amounts of write-offs. Tables II.1, II.2, and II.3 show the total amount of debt written off for borrowers who underwent debt settlements during fiscal years 1997, 1996, and 1995, respectively. The tables also provide this information for borrowers in two ranges of write-offs—those who received less than $500,000 and those who received $500,000 or more in debt forgiveness.

As requested, our objectives were to assess (1) the levels of outstanding principal on active direct farm loans at the end of fiscal years 1995 through 1997, including the amounts owed by delinquent borrowers; (2) the amount of debt written off by FSA through the debt settlement process in fiscal years 1995 through 1997; and (3) FSA’s use of three statutory provisions enacted in the mid-1990s that authorize FSA to contract with private attorneys for legal assistance in resolving delinquent farm loan accounts, with private lenders for assistance in servicing farm loan borrowers’ accounts, and with private collection agencies for assistance in collecting delinquent farm loans.

To address the first two objectives, we obtained and analyzed information in the computerized databases in FSA’s St. Louis Finance Office and in the agency’s various financial reports on the farm loan portfolio. As requested, this effort included compiling information on (1) borrowers who owe less than $500,000 and those who owe $500,000 or more, (2) the five states with the highest amounts of delinquent debt, (3) borrowers who received write-offs of less than $500,000 and those who received write-offs of $500,000 or more, and (4) the five states with the highest amounts of write-offs. We did not verify the accuracy of the information contained in these databases or reports. To address the third objective, we reviewed the three statutory provisions and their legislative histories.
We interviewed FSA officials, including the Deputy Administrator for Farm Credit Programs, the Director of the Loan Servicing and Property Management Division, and the Director of the Financial Management Division, as well as USDA’s Associate General Counsel for Rural Development. Additionally, concerning the authority to contract with private attorneys, we discussed contracting by the Department of Justice with its Director of Debt Collection Management, and we reviewed the executive memorandum for USDA and Justice on FSA’s contracting, USDA’s and FSA’s operating documentation, and statistical information we obtained from FSA’s Louisiana state office. On contracting with private collection agencies, we reviewed the requirements of the Debt Collection Improvement Act of 1996 concerning the transfer of delinquent accounts to the Department of the Treasury and guidance issued by Treasury on using the private collection agencies that it has under contract. Our work was performed from January through April 1998 in accordance with generally accepted government auditing standards.

Major contributors to this report: Charles M. Adams, Assistant Director; Oliver H. Easterwood; Jerry D. Hall; Patrick J. Sweeney; and Larry D. Van Sickle.
Pursuant to a congressional request, GAO provided information on: (1) the levels of outstanding principal on active direct farm loans at the end of fiscal years (FY) 1995 through 1997, including the amounts owed by delinquent borrowers and the amount of debt written off by the Farm Service Agency (FSA) through the debt settlement process in each of these fiscal years; and (2) FSA's use of three statutory provisions enacted in the mid-1990s that authorize FSA to contract with the private sector to resolve delinquent loans. GAO noted that: (1) the size of the Farm Service Agency's direct farm loan portfolio has decreased in recent years; (2) the outstanding principal on active farm loans totaled about $11.4 billion in September 1995, $10.5 billion in September 1996, and $9.7 billion in September 1997; (3) the percentage of the portfolio held by delinquent borrowers--those who were at least 30 days past due on loan repayment--also decreased; (4) in September 1995, delinquent borrowers held 40.7 percent of the outstanding principal on direct farm loans; the delinquency rates for 1996 and 1997 were 34.2 percent and 28.2 percent, respectively; (5) FSA wrote off about $380 million for almost 2,000 borrowers in FY 1997 through its debt settlement process, which essentially represents the agency's final resolution of unpaid loans and generally occurs after loan-security property has been liquidated; (6) the agency had previously written off about $860 million and $780 million in FYs 1996 and 1995, respectively; (7) of the more than $2 billion that was written off during the 1995-1997 period, most--81.5 percent--was written off with no payments to the agency by the borrowers at the time of debt settlement; (8) the extent of these write-offs underscores the high risks associated with the agency's farm loans; (9) to date, FSA has made only limited use of one of the three new loan-servicing authorities it was given in the mid-1990s; (10) specifically, it has contracted with private attorneys to obtain legal assistance in resolving delinquent farm loan accounts in two states and has no plans to expand such contracting to other states; (11) in regard to the other two authorities, FSA has not contracted with private lenders or with private collection agencies and is not actively considering such contracting; and (12) agency officials said they have not used these new contracting authorities more extensively because, among other things, they can obtain assistance from the Department of Justice and the Department of the Treasury or they can perform the servicing functions with their own personnel.
SOCOM and the Army are purchasing two separate active infrared countermeasure systems to protect U.S. aircraft. They plan to spend a total of approximately $2.74 billion, including about $2.475 billion for 815 ATIRCM systems and associated common missile warning systems and about $261 million for 60 DIRCM systems and a unique missile warning system. In addition, there are many other potential customers for an active infrared countermeasure system, such as Air Force, Navy, and Marine Corps aircraft that have not yet been committed to either ATIRCM or DIRCM.

SOCOM and the Army both need an effective integrated infrared countermeasure system capable of defeating infrared guided weapon systems. The Army considers this capability especially critical for countering newer, more sophisticated infrared guided missiles. Likewise, SOCOM has established an urgent need for a near-term directional infrared countermeasure system capable of countering currently deployed infrared guided missiles. To meet its urgent need, SOCOM plans to exercise its first production option for 15 DIRCM systems in July 1998 and to procure 45 additional systems during fiscal years 1998 and 1999. The Army expects to begin ATIRCM production in April 2001.

Two generations of infrared guided missiles are currently deployed. First generation missiles can be defeated by current countermeasures, such as flares. Second generation infrared guided missiles are more difficult to defeat, and more advanced missiles are being developed that will have even greater capabilities against current countermeasures. To defeat infrared guided missiles, the ATIRCM and DIRCM systems will emit directed energy to decoy or jam a missile’s seeker. Both systems are composed of a missile approach warning system, a computer processor, a power supply, and energy transmitters housed in a pointing turret. After a missile is detected, the computer is to rotate the turret and point the transmitters at the missile. The transmitters are then to emit the directed energy.

Congress and DOD have a long-standing interest in reducing the proliferation of electronic warfare systems. By urging development of common systems, Congress expected to reduce the costly proliferation of duplicative systems and achieve savings in program development, production, and logistics. DOD agrees on the need for commonality, and its policy statements reflect congressional concerns about electronic warfare system proliferation. DOD policy states that, before initiating a new acquisition program, the services must consider using or modifying an existing system or initiate a new joint-service development program. DOD policy also requires the services to consider commonality alternatives at various points in the acquisition process.

Joint electronic warfare programs and increased commonality among the services’ systems result in economy-of-scale savings. Buying larger quantities for common use among the services usually results in lower procurement costs. Similarly, lower support costs result from a simplified logistics system providing common repair parts, maintenance, test equipment, and training. For example, under Army leadership, a common radar warning receiver was acquired for helicopters and other special purpose aircraft of the Army, Marine Corps, and Air Force.
In addition, a follow-on radar warning system for certain Army and Marine Corps special purpose aircraft and helicopters was jointly acquired, with savings that Army officials estimate at $187.7 million attributable to commonality benefits.

The ATIRCM and DIRCM systems will initially have one key difference in technological capability. The DIRCM system will rely on existing flash lamp technology to defeat all currently deployed first and second generation threat missiles. (A flash lamp emits a beam of light energy to confuse the missile’s seeker.) The Army’s ATIRCM system will also be fielded with a flash lamp, but it will have a laser as well. According to SOCOM officials, after the flash lamp-equipped DIRCM is fielded, they plan to upgrade the DIRCM system with a laser that has completed development and is already in production. As described later in this report, the upgraded DIRCM system could be available around the same time as the ATIRCM system. Furthermore, according to DOD officials, the DIRCM laser could be the same as the one used in ATIRCM. The Army’s cost and effectiveness analysis used to justify the ATIRCM system indicates that, with a laser upgrade, DIRCM could provide capability equal to ATIRCM’s.

The two systems will have a total of three turrets of different sizes. According to DOD and contractor officials, turret size matters because larger aircraft present larger targets and must apply more energy to decoy an incoming missile’s seeker. A larger turret can direct more of the flash lamp’s energy, and the greater the amount of directed energy, the greater the likelihood that the missile will become confused as to the actual location of the target aircraft. The DIRCM turret, to be used on SOCOM C-130s, is the largest of the three. The United Kingdom intends to use the larger DIRCM turret on its larger aircraft and a smaller turret on its helicopters and smaller aircraft. The ATIRCM turret is between the two DIRCM turrets in size. Because the ATIRCM turret will also have a laser, however, DOD acquisition officials believe it will ultimately be more effective than any system equipped only with a flash lamp.

Both the DIRCM and ATIRCM programs are experiencing delays that have moved their projected availability dates significantly closer together. However, DOD has not yet taken advantage of the schedule changes to determine whether one system will be more cost-effective than the other and whether it can achieve significant savings by procuring only one system to protect all its aircraft.

SOCOM plans to exercise the first of three production options and buy 15 DIRCM systems in July 1998. These systems will not be equipped with lasers; production funds for the DIRCM laser upgrade are projected to be included in the fiscal year 2001 budget. Production of ATIRCM is to begin in April 2001. SOCOM officials maintain that, because of their urgent need, they cannot wait for the laser-equipped ATIRCM. However, the difference in the time frames for beginning production can be misleading. DIRCM is scheduled to go into production before operational testing begins, while ATIRCM is not scheduled to begin production until operational testing is completed. If both DIRCM and ATIRCM production were to begin immediately after their respective operational tests, DIRCM’s production would be delayed until April 2000 and ATIRCM’s would move up to January 2001. As a result, the systems would start production within 9 months of each other.
Additionally, DIRCM with a laser upgrade is projected to be available in 2001, about the same time as ATIRCM with a laser.

The Army is developing ATIRCM, and the United Kingdom, with SOCOM, is developing DIRCM, to work on a variety of aircraft, including some that are the same or similar. (See table 1.) For example, the United Kingdom plans to use the DIRCM system on the CH-47 Chinook helicopter, while the Army plans to use ATIRCM on the Chinook. By varying the size of the turret, the United Kingdom intends to use DIRCM on aircraft of a wide range of sizes, from its very large, fixed-wing C-130s to small rotary wing aircraft such as the Lynx. Although the Army currently has no plans to install ATIRCM on fixed-wing aircraft the size of C-130s, it too will be placing its system on a wide range of aircraft, from the very large CH-47 heavy lift helicopter to the small OH-58D helicopter. If development of both systems is successful, therefore, the Army and the United Kingdom will prove that ATIRCM and DIRCM provide redundant capability for many aircraft. In addition to those SOCOM and Army aircraft identified as platforms for DIRCM or ATIRCM, many Air Force, Navy, and Marine Corps aircraft are not yet committed to either system. These include large fixed-wing aircraft of the Air Force, as well as 425 future Marine Corps V-22 aircraft and the Navy’s SH-60 helicopters.

DOD’s plans to acquire infrared countermeasure capability may not represent the most cost-effective approach. While we recognize SOCOM’s urgent need for a countermeasure capability in the near term, we believe that DOD can satisfy this need and meet the Army’s needs without procuring two separate systems. Specifically, proceeding with procurement of the first 15 DIRCM systems beginning in July 1998 appears warranted. However, continued production of DIRCM may not be the most cost-effective option for DOD, since the Army is developing the ATIRCM system, which will have the same technology, will be available at about the same time, and is being developed for the same or similar aircraft.

We therefore recommend that the Secretary of Defense (1) direct that the appropriate tests and analyses be conducted to determine whether DIRCM or ATIRCM will provide the most cost-effective means to protect U.S. aircraft and (2) procure that system for U.S. aircraft that have a requirement for similar infrared countermeasure capabilities. Until that decision can be made, we further recommend that the Secretary of Defense limit DIRCM system procurement to the first production option of 15 systems to provide a limited number for SOCOM’s urgent deployment needs.

In written comments on a draft of this report, DOD concurred with our recommendation that the appropriate tests and analyses be conducted to determine whether ATIRCM or DIRCM will provide the most cost-effective protection for U.S. aircraft. According to DOD, such analyses were completed in 1994 and 1995 and showed that each system was the most cost-effective for its intended application: DIRCM for large, fixed-wing C-130 aircraft and ATIRCM for smaller, rotary wing aircraft. However, as a result of events that have occurred in both programs since those analyses were conducted, DOD’s earlier conclusions about cost-effectiveness are no longer necessarily valid, and a new analysis needs to be conducted, as we recommended.
For example, the 1994 cost and operational effectiveness analysis conducted for SOCOM’s C-130s concluded that DIRCM should be selected because it was to be available significantly sooner than ATIRCM. As our report states, the DIRCM schedule has slipped significantly, and by the time the planned laser upgrade for DIRCM is available, ATIRCM is also scheduled to be available. Furthermore, the 1994 analysis justifying DIRCM concluded that ATIRCM would be a less expensive option and did not conclude that DIRCM would be more effective than ATIRCM. Thus, the question of which system would be most cost-effective for SOCOM’s C-130s is a legitimate issue that DOD should address in a new cost-effectiveness analysis before SOCOM commits fully to DIRCM.

In addition, the Army’s 1995 cost and operational effectiveness analysis justifying ATIRCM concluded that DIRCM could meet the Army’s rotary wing requirement if DIRCM’s effectiveness were improved by adding a laser. As our report notes, DOD now plans to acquire a laser as an upgrade for DIRCM. Thus, whether DIRCM or ATIRCM would be most cost-effective for the Army’s rotary wing aircraft remains a legitimate and viable question that DOD should reconsider.

Further, in 1994 and 1995, when DOD conducted the prior cost-effectiveness analyses, effectiveness levels for DIRCM and ATIRCM had to be assumed from simulations because no operational test results were available at that time. Operational testing, including live missile shots against the DIRCM system, is scheduled to begin in the summer of 1998, and ATIRCM testing is scheduled for 1999. In the near future, then, DOD may be in a better position to know conclusively how effective DIRCM or ATIRCM will be, and this should be taken into consideration in a new cost-effectiveness analysis.

DOD did not concur with a recommendation in a draft of this report that one system be procured for all U.S. aircraft, arguing that one system cannot meet all aircraft requirements. We have clarified our recommendation by eliminating the word “all.” Our intent was to focus this recommendation on U.S. aircraft having a requirement for advanced infrared countermeasure protection, such as that to be provided by DIRCM or ATIRCM. For those aircraft, we reiterate that the United Kingdom plans to use the DIRCM system on a wide variety of fixed- and rotary wing aircraft of many shapes and sizes, and the Army plans to use ATIRCM on a wide variety of rotary wing aircraft, as well as on the fixed-wing CV-22. Thus, DOD should reconsider whether DIRCM or ATIRCM alone could provide the advanced infrared countermeasure protection necessary to meet multiple U.S. aircraft requirements.

In commenting further on its belief that one system cannot meet all U.S. aircraft requirements, DOD also stated that (1) the SOCOM DIRCM is too heavy for Army helicopters, (2) ATIRCM’s smaller turret drive motors are not designed for the increased wind in SOCOM C-130 applications, and (3) ATIRCM will not emit enough Band I and II jamming energy to protect SOCOM’s C-130s. We agree that the SOCOM DIRCM is too heavy for Army helicopters but point out that the DIRCM contractor is designing a smaller DIRCM turret for the United Kingdom’s helicopters that would not be too heavy for the Army’s helicopters. Moreover, DOD has never planned for DIRCM or ATIRCM to be the only means of protecting its aircraft from infrared guided missiles.
Other systems are available to DOD to help protect against threat missiles, including those in Bands I and II, and these alternatives should be considered for use in conjunction with DIRCM or ATIRCM as DOD determines how to protect its aircraft in the most cost-effective manner.

DOD also did not concur with our recommendation that it limit initial DIRCM production to the first 15 units to begin filling its urgent need and to provide units for testing and analysis before committing SOCOM’s entire fleet of 59 C-130s to the DIRCM program. DOD maintained that, if any production decisions were delayed, SOCOM’s remaining C-130s would remain vulnerable to missile threats such as the one that shot down a SOCOM AC-130 during Operation Desert Storm. We continue to believe that the additional analysis needs to be conducted before any DIRCM production decisions beyond the first one are made. More than 7 years have passed since the unfortunate loss of the SOCOM AC-130 and its crew in 1991, and during that time DOD delayed the first DIRCM production decision several times. The resolution of the technical problems causing these schedule slips can be confirmed only through successful testing, and implementation of our recommendation would allow units to be produced for that testing. Finally, we agree with DOD that SOCOM’s need is urgent and believe that the best way to begin fulfilling that need while determining whether DIRCM or ATIRCM is the more cost-effective system for C-130s is to limit DIRCM production to the first 15 systems.

To develop information for this report, we compared and examined the Army’s and SOCOM’s respective plans and proposed schedules for acquiring the ATIRCM and DIRCM systems. We obtained acquisition and testing plans and the proposed schedule for acquiring and fielding the systems, and we compared these plans with legislative and DOD acquisition guidance and with the results of past DOD procurements. We discussed the programs with officials of the ATIRCM Project Office, St. Louis, Missouri, and the DIRCM Project Office, Tampa, Florida. We also visited Lockheed-Sanders, the ATIRCM contractor, and Northrop-Grumman, the DIRCM contractor, and discussed their respective programs. We conducted our review from August 1996 to December 1997 in accordance with generally accepted government auditing standards.

As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report. A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency’s first request for appropriations made more than 60 days after the date of the report.

We are sending copies of this report to appropriate congressional committees, the Under Secretary of Defense for Acquisition and Technology, the Secretary of the Army, the Director of the Office of Management and Budget, and the Commander of the U.S. Special Operations Command. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Danny Owens, Wendy Smythe, Charles Ward, and Mark Lambert.
GAO reviewed the Army's Advanced Threat Infrared Countermeasure system (ATIRCM) and the U.S. Special Operations Command's (SOCOM) Directional Infrared Countermeasure (DIRCM) system to determine whether the Department of Defense (DOD) is justified in acquiring both systems. GAO noted that: (1) DOD may be able to achieve sizable savings by procuring, supporting, and maintaining only one active infrared countermeasure system to protect its aircraft from infrared guided missiles; (2) despite congressional emphasis on, and DOD's stated commitment to, commonality, SOCOM and the Army are acquiring two separate countermeasure systems that eventually will have the same laser effect technology; (3) DOD should determine which system is more cost-effective and procure that one to protect its aircraft; (4) if DIRCM is determined to be more cost-effective, the ATIRCM program should be terminated; and (5) if ATIRCM is determined to be more cost-effective, no additional DIRCM systems should be procured beyond those planned to be procured in July 1998 to meet SOCOM's urgent need.
DOE has numerous sites and facilities around the country where the department carries out its missions, including developing, maintaining, and securing the nation’s nuclear weapons capability; cleaning up the nuclear and hazardous wastes resulting from more than 50 years of weapons production; and conducting basic energy and scientific research, such as mapping the human genome. DOE relies on contractors to operate its facilities and accomplish its missions. This mission work is carried out under the direction of the National Nuclear Security Administration (NNSA) and DOE’s program offices, including the Offices of Environmental Management and Science.

The Federal Acquisition Regulation provides the basic guidelines that all federal agencies must follow in planning for and carrying out the process for awarding contracts. DOE, like most federal agencies, has supplemental regulations, set forth in the Department of Energy Acquisition Regulation, that recognize requirements unique to the department, such as maintaining nuclear safety and security. Furthermore, in the mid-1990s, DOE’s Office of Procurement and Assistance Management developed the DOE Acquisition Guide, which provides further information on contract award planning activities, including meeting federal requirements for competition, establishing evaluation criteria, and selecting the appropriate contract type.

The process of competing and awarding contracts has three main phases, each of which generally includes several steps:

Planning. This phase involves identifying the mission need for the contract, alternatives for obtaining the goods or services, estimated costs, and any program or technical risks. Federal regulations state that the agency should use an integrated team approach—involving all personnel responsible for significant aspects of the contract award process—in developing the plan and associated milestone schedule. The plan should also identify the appropriate contract type for the work, considering the alternatives and risks. In addition, a selection official is designated, and an evaluation board is formed to carry out the contract award process.

Developing the solicitation. Generally, the evaluation board, with input from technical and other advisors, develops a statement of work for the contract and criteria that will be used to evaluate proposals submitted by companies. During this phase, DOE may also do market research to determine whether sources capable of meeting the agency’s needs exist. Finally, DOE issues the request for proposals, which describes the work to be done, provides instructions to companies on how and when to submit their proposals, and describes the criteria that will be used to evaluate the proposals and select a contractor.

Evaluating proposals and awarding the contract. Once proposals are received, the evaluation board evaluates them against the established criteria. Evaluation criteria may differ for each contract but generally include factors such as a company’s proposed technical approach; past performance on DOE, other federal agency, and commercial contracts; the proposed cost; and the qualifications of the key personnel identified in the proposal. During this phase, DOE may determine that it is necessary to hold discussions with individual companies about issues raised during the evaluation of their proposals.
Based on a written report of the evaluation board that outlines the strengths and weaknesses of each proposal, the selection official should select a proposal that represents the best value for the government. The DOE contracting officer, who has generally provided contracting support to the evaluation team, then awards the contract.

The Competition in Contracting Act of 1984 provides statutory authority for GAO to decide cases involving bid protests. GAO’s bid protest decisions address specific allegations raised by unsuccessful companies claiming that particular contracting actions were contrary to federal laws and regulations. A bid protest may be filed by an “interested party,” defined as an actual or prospective bidder or offeror with a direct economic interest in the contract. Unless the protest is dismissed at the outset because it is procedurally or substantively defective, such as if the protest was not filed in a timely manner, the contracting agency is required to file a report with GAO responding to the protest. GAO will consider the facts and legal issues raised and is required to resolve the protest not later than 100 days from the date the protest was filed, either by sustaining the protest and recommending that the agency take corrective action, or by denying or dismissing the protest.

DOE’s Office of Procurement and Assistance Management and NNSA’s Office of Acquisition and Supply Management establish policies and guidance for awarding contracts according to federal and departmental regulations. Officials from DOE’s programs, such as NNSA and the Offices of Environmental Management and Science, usually manage the contract planning and award process since they are most knowledgeable about the work needed and are ultimately responsible for managing the work and overseeing the contracts. Other organizations within DOE work with the program officials to plan and carry out the contract award process by providing contracting, legal, and technical advice and assistance. DOE’s Offices of General Counsel and Procurement and Assistance Management review, evaluate, and approve key documents during the entire process, such as the initial plan for the contract award process and the request for proposals.

Most of the 31 DOE contract awards that we reviewed were delayed, based on a comparison of the planned and actual award dates for the contracts. Nearly all of the 24 contracts awarded with competition faced delays of months to years; of the 7 contracts awarded without competition, 6 were awarded on time or nearly so. Delays occurred throughout the contract award process due to factors such as time spent repeating steps of the process to resolve a company’s formal protest, or because of unanticipated complications that affected DOE’s ability to adhere to the planned schedule.

For the 31 contracts we reviewed, most were awarded months to years after the planned award date. To determine whether contract awards were delayed, we compared the planned award date with the date DOE actually made the award. None of the 24 competitive contracts we reviewed was awarded according to DOE’s planned schedule, and nearly all were delayed by months or years. Only 1 of these competitive contract awards was made within 1 month of the planned award date. The remaining awards experienced lengthier delays ranging from 2 months to over 2 years. In contrast, delays were less prevalent among the noncompetitive contract awards included in our sample.
Six of these 7 contracts were awarded on time or within 1 month of the planned award date. The other contract was awarded 2 months late. (See fig. 1.) While the sample of contract awards reviewed did not allow us to generalize to all of DOE’s contract awards, the contracts in the sample represent about 73 percent of the department’s total contracting dollars for awards of $5 million or greater during fiscal years 2002 through 2005. Further information on our scope and methodology is presented in appendix I.

Awarding contracts noncompetitively enabled DOE to eliminate parts of the process in which delays typically occurred on the competitively awarded contracts we reviewed. For example, for a noncompetitively awarded contract, DOE would not need to evaluate multiple proposals because it had already identified a company with the qualifications and experience necessary to perform the work. DOE generally awarded the seven contracts we reviewed without competition because of an urgent need to obtain the goods and services and to ensure that the contractors could start work quickly. This is consistent with one of the exceptions to the requirement for full and open competition allowed by federal regulations. Nevertheless, since federal regulations generally require that agencies use full and open competition, the contracts awarded competitively better reflect the timeliness of DOE’s overall contract award process.

The schedules for the contract awards we reviewed showed that DOE planned to take from about 1 month to up to 2 years to award its contracts. Variation in the length of the planned schedule may depend on several characteristics of the contract award process, such as whether the contract was awarded competitively or noncompetitively, the dollar value of the award, or the complexity of the work. DOE officials indicated that facility management contracts tend to be more complex and, therefore, can take longer and could be more prone to delay, but we found that some facility management contract awards were less delayed than some smaller dollar value, more straightforward contract awards. For example, the $4.8 billion facility management contract to operate the Idaho National Laboratory took just over 1 year to award and was delayed about 4 months, while a $5 million administrative support services contract at the Savannah River site took almost 2 years to award and was awarded 1-1/2 years later than the planned award date.

Delays in awarding contracts occurred when DOE modified its approach after the start of the contract award process. DOE frequently had to modify its approach to take corrective action in response to bid protests, or because DOE took more time than planned to evaluate proposals, hold discussions with companies that had submitted proposals, or obtain headquarters approval for key documents.

Taking corrective action due to bid protests. One cause of delay for DOE’s contract awards was the time the department spent reworking portions of the process in response to companies that filed bid protests with GAO. Companies generally filed protests because they believed that DOE had violated federal acquisition regulations in carrying out the contract award process, most typically in evaluating or soliciting proposals.
Rework in response to a bid protest was initiated in one of two ways: (1) GAO found fundamental flaws in the contract award process, sustained the company’s protest, and recommended that DOE take corrective action or (2) DOE initiated corrective action after conducting an assessment of the validity of the company’s protest, without waiting for a formal GAO decision. In either case, rework can include canceling or revising the solicitation, reevaluating proposals, or accepting proposals from firms that were previously excluded from the competition. Rework resulting from corrective action can significantly delay the contract award. Of the 31 contracts we reviewed, 10 resulted in bid protests filed with GAO. Protests on 8 of these were denied or dismissed by GAO, or withdrawn by the protestor. DOE agreed to take corrective action on the remaining 2, once as a result of a protest being sustained by GAO, and once before GAO issued a decision. Delays due to such corrective action added as much as 2 years to the contract award process. For example, a contract worth $1.6 billion to clean up the River Corridor area at DOE’s Hanford, Washington, site was delayed more than 1-1/2 years while DOE took corrective action in response to a bid protest. GAO sustained a bid protest on the initial award because DOE failed to properly apply its criteria for evaluating the cost proposals. In response, DOE revised and reissued the solicitation, evaluated new proposals, and selected a new contractor about 2-1/2 years after the initial planned award date. In another example, an NNSA contract award for administrative support services was delayed almost 2 years when NNSA took corrective action in response to a bid protest. Corrective action involved revising and reissuing the solicitation. However, the NNSA evaluation board then had difficulties completing cost evaluations of the revised proposals, adding more time to the process, and finally awarded contracts to all three companies almost 2 years later than the planned award date.

Evaluating proposals. DOE generally estimated between 1 and 6 months to review and evaluate proposals for its competitive contract awards. However, for the 24 competitively awarded contracts we reviewed, this part of the process took on average 5 months longer than planned, accounting for over half of the overall delay in carrying out the process. For example, DOE was planning to award multiple contracts to provide environmental remediation and deactivation of facilities across the DOE complex. DOE estimated that it would receive about 20 proposals and allowed about 3 months for evaluating the proposals and selecting the contractors. The department actually received more than 100 proposals and was delayed over 9 months, in part due to the additional time required to review the larger than expected number of proposals. In another example, the Savannah River Operations Office allowed about 8 weeks to evaluate proposals for an administrative support services contract at the site. However, the evaluation board encountered several difficulties during the evaluation process, including concerns about the wide range of the cost estimates in the proposals. These concerns necessitated a more detailed cost evaluation, resulting in the evaluation taking 5 months longer than planned.

Holding discussions with companies submitting proposals. For eight of the contracts we reviewed, DOE had to modify its approach to holding discussions.
In some cases, the planned schedule did not include any time for discussions and, in other cases, the time DOE took to discuss proposals with the companies submitting them exceeded the time estimated in the plan. For these contracts, the actual time to discuss proposals exceeded the planned time by an average of 3 months. For example, two contracts for conceptual design of a waste processing facility at the Savannah River site were awarded 4 months late because DOE initially intended to make the contract awards without discussions and, therefore, did not plan time in the original schedule for this part of the process. However, after the contracting process was under way, the department decided it was necessary to conduct discussions on the proposals because none of the companies had provided all of the cost and pricing data necessary to fully evaluate the proposals. Similarly, a contract for medical services for the Hanford site was delayed more than 5 months, in part because the department held discussions with companies concerning their cost proposals and technical approach but did not plan for this activity in the schedule.

Obtaining headquarters approval of key documents. DOE contract awards were also delayed by the time needed to obtain headquarters approval of key documents. In five contract awards we reviewed, delays in obtaining the required review and approval from DOE headquarters officials caused an average 5-month delay in contract award. For example, for a contract to manage and operate DOE’s Berkeley National Laboratory, DOE allowed 1 month for headquarters review and approval of the draft solicitation. However, headquarters review and approval took 5 months, partly due to the complexities of competing a contract that had been in place for decades. As a result, the contract award was 4 months late. In another example, the Oak Ridge Operations Office allowed 1 month for headquarters review and approval of a solicitation for a contract for medical isotope production. However, headquarters review and concurrence took about 1 year longer than anticipated because of additional requests for information from the Congress on DOE’s strategy for going forward with the work under this contract. Overall, the contract award was delayed nearly 2 years due to this lengthy review and the need for extensive discussions during the evaluation phase with companies that had submitted proposals.

It is unclear to what extent these delays in awarding contracts were avoidable. At times, delays in the process appeared to be outside DOE’s control and, therefore, more difficult to anticipate and manage, such as difficulty obtaining funding or the need to address concerns raised by the Congress or other stakeholders. For example, a $562 million contract for facilities and services to stabilize the department’s depleted uranium hexafluoride inventory at Portsmouth and Paducah was delayed after the Office of Management and Budget, late in the process, raised concerns about the need to build facilities at both locations. Dealing with this issue contributed to the delay of the contract award. While external factors such as these can affect the planned schedule, some delays are clearly avoidable, such as when DOE had to take corrective action in response to bid protests because the department failed to follow its contract award process. In such cases, better management and oversight of the contract award process would provide opportunities for improvement.
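To illustrate how schedule overruns of the kind described above could be tallied by phase of the award process, the following minimal sketch (in Python) aggregates planned and actual durations. The contract names, phases, and figures are hypothetical illustrations, not data drawn from DOE’s contract files.

```python
# Minimal sketch of tallying schedule overruns by phase across contract
# awards. All contract names, phases, and durations are hypothetical
# illustrations, not data from DOE's contract files.
from collections import defaultdict

# (contract, phase, planned months, actual months)
records = [
    ("Contract A", "evaluating proposals",  3, 12),
    ("Contract B", "evaluating proposals",  2, 7),
    ("Contract B", "holding discussions",   0, 3),  # discussions not planned for
    ("Contract C", "headquarters approval", 1, 5),
]

overruns = defaultdict(list)
for contract, phase, planned, actual in records:
    overruns[phase].append(actual - planned)

for phase, months in sorted(overruns.items()):
    avg = sum(months) / len(months)
    print(f"{phase}: average overrun {avg:.1f} months across {len(months)} award(s)")
```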
According to the Director of DOE’s Office of Procurement and Assistance Management, DOE cannot plan for all possible events when developing a schedule for a contract award process and must respond to changing circumstances and incorporate additional steps into the award process when necessary.

Delays in awarding contracts could increase costs both to DOE and the companies that are competing for the potential contracts and could also affect whether companies are willing to compete for DOE contracts in the future. Specifically, awarding a contract months or years after the date anticipated in the contract award schedule could increase the overall costs to the government. DOE staff involved in a contract award may include officials from the program offices, such as the Offices of Environmental Management or Science; contracting and legal specialists; and nuclear safety and regulatory experts. Both salary and travel costs associated with keeping these integrated teams together for the duration of the contract award process can be significant. However, these costs cannot be readily quantified since the department does not accumulate or track the costs associated with individual contract awards. Nevertheless, DOE officials acknowledge that there may be additional costs associated with contract awards that are delayed. The added time and resources spent on delayed contract awards may impact the department’s ability to meet its management and oversight responsibilities, including planning for and carrying out other contract awards.

According to DOE officials, prolonged delays in awarding contracts could also increase costs for companies that are submitting proposals. Companies competing for DOE contracts invest considerable time and resources in developing their proposals. However, since companies generally consider their proposal costs to be proprietary information, we could not quantify those costs. For DOE’s contracts, once companies submit proposals, DOE generally requires the companies to ensure that key personnel identified in the proposals continue to be available until the award decision has been made. If a contract award is delayed, companies may have to keep key personnel available for months or even years, which could increase their costs and/or represent a possible lost opportunity to the companies since those key personnel may not be available for other work. For example, our review of file documents found that several small businesses competing for a contract to provide infrastructure services at DOE’s Paducah, Kentucky, site expressed frustration over the costs of keeping key personnel available while DOE took nearly 1 year after the planned award date to complete the process and award the contract.

The length of time and increased costs to carry out a contract award may also discourage some companies from competing for DOE work, thus potentially affecting the extent of competition in the future. Ensuring adequate competition for contracts can help obtain the best value for the government. According to DOE officials who oversee the contracting process, they have not seen a decline in the number of companies willing to compete for DOE’s contracts, but the officials agree that this is a potential concern, and it has been a concern in the past.
For example, a January 2001 report that analyzed the contractor base for DOE’s environmental cleanup work stated that the number of companies that were both willing and able to compete for major remediation contracts had declined over the past several years. The report concluded, among other things, that promoting competition among a large number of qualified companies was crucial to getting the best value for the government and accomplishing the environmental cleanup mission. Therefore, anything that could reduce the pool of potential companies competing for DOE’s work, such as an inefficient contract award process, would not be in the department’s interest in fostering full and open competition for its work.

When we began our review in July 2005, DOE officials generally were not addressing delays in awarding contracts because DOE’s data showed that its contracts were being awarded in a timely manner. However, since late 2005, DOE has been implementing three main efforts to improve its contracting, partly in response to criticism by GAO and others. These efforts include (1) an initiative to address the DOE contracting weaknesses that led GAO to designate the area as high risk for fraud, waste, abuse, or mismanagement; (2) a plan to restructure DOE’s Office of Environmental Management to focus more attention on the contract award process; and (3) steps by NNSA and DOE’s Office of Procurement and Assistance Management to improve performance measures on the timeliness of contract awards. While these improvement efforts appear to be constructive and may also help to address delays in awarding contracts, we have two remaining concerns related to the timeliness of DOE’s contract awards. First, the efforts do not reflect a comprehensive approach within the department to improve the timeliness of contract awards. Second, even if DOE uses improved performance data to identify the causes of delays in awarding its contracts, the department does not have a systematic method to develop and disseminate best practices and lessons learned.

When we began our work in July 2005, DOE officials generally were not taking steps to address the causes of delays in awarding contracts. One of the main reasons for this was that DOE’s performance data generally showed that its contracts were being awarded in a timely manner. Although our analysis uncovered delays in most of the 31 contract awards valued at $5 million or greater from fiscal years 2002 through 2005 that we reviewed, DOE’s performance data for the same period showed that most of its contract awards were timely. For example, the data for fiscal years 2003 and 2004 showed that, respectively, 88 and 86 percent of the contract awards were timely. According to DOE’s performance measures, a timely award meant that DOE had awarded the contracts within 150 days after receiving proposals from companies. An official from the Office of Procurement and Assistance Management who manages the performance data said that the department used 150 days as a timeliness standard because it represented a realistic yet challenging time frame for awarding the contracts.

However, DOE’s performance data did not provide a complete picture of the department’s timeliness in awarding its contracts for two reasons. First, the data for those years did not include all of the contract award process.
Instead, the performance data DOE used measured only the timeliness in carrying out the final stage of the process (the evaluation phase), from the point at which DOE received the proposals through contract award. Consequently, the performance data excluded the earlier steps of the process, from planning the contract award through the process of soliciting proposals from interested companies. For the 24 competitively awarded contracts we reviewed, each valued at $5 million or greater, the excluded steps accounted for over half of the total time that DOE typically spent awarding its contracts. As a result, some of DOE’s contract awards may appear timely when, in fact, DOE awarded the contracts weeks or months after the planned award date. For example, in fiscal year 2003, DOE awarded a contract for laundry services at the Hanford site in Washington about 2 months after its planned award date. However, DOE considered the contract award to have been timely because the department was able to award the contract within 150 days of receiving proposals.

Officials from the Office of Procurement and Assistance Management said DOE’s timeliness data are intended to measure the performance of contracting staff. Therefore, the officials said that the timeliness measure includes only the evaluation phase of the contract award process because that is the aspect of the process that would more likely be under the direct control of the contracting staff. Prior to the evaluation phase, officials said that contracting staff generally rely on program staff to develop key information, such as a description of the work to be carried out under the contract. Furthermore, according to DOE’s guidance, program office staff are generally responsible for planning the contract award process. However, we found that DOE’s contracting staff were generally involved throughout the entire contract award process as part of an integrated team approach. Furthermore, it is unclear to what extent contracting staff have direct control over any phase of the process.

Second, DOE’s performance data were incomplete because data on DOE’s facility management contracts were excluded. For example, in fiscal year 2005, DOE competitively awarded seven facility management contracts valued at a total of about $10.4 billion. However, DOE excluded the data on these facility management contracts from its performance measures for timeliness. DOE officials from the Office of Procurement and Assistance Management said they excluded the facility management contracts from the department’s performance data because those contracts are particularly complicated to award. Unlike the contracts included in the performance measures for timeliness, the department could not determine an appropriate timeliness standard for the facility management contracts. As a result of excluding the facility management contracts, the performance data for fiscal year 2005 did not reflect the department’s timeliness in awarding about 97 percent of the nearly $11 billion in contracts that the department awarded that year.

In contrast to DOE’s approach, we measured the department’s timeliness in awarding its contracts by comparing the planned award dates with the dates DOE actually awarded the contracts. We used the planned award date as the standard for determining whether contracts were awarded in a timely manner, rather than comparing the time it took DOE to award the contract after proposals were received with a fixed standard, such as 150 days.
A fixed standard is arbitrary because the actual time needed to carry out a contract award process can differ greatly for legitimate reasons, depending on the circumstances of each individual contract award. Furthermore, our measure included the facility management contracts because those contracts can represent a significant portion of the total value of DOE’s contract awards.

Since late 2005, DOE has initiated three new efforts to improve its contracting processes. DOE is implementing these actions as part of its continuing efforts to improve its contracting processes and, in part, because of long-standing criticisms by GAO and others. Since 1990, GAO has designated DOE’s contract management, including project management, as an area at high risk for fraud, waste, abuse, and mismanagement. Our reviews have uncovered weaknesses in DOE’s management and oversight of its contracts that, in some cases, have undermined DOE’s ability to carry out its missions or dramatically increased its costs. More recently, criticisms have included the frequency of delays in awarding contracts and the need for reforms. For example, in congressional testimony in November 2005 and March 2006, the Assistant Secretary for Environmental Management cited the need for improvements in the department’s contract award practices. Furthermore, conferences on DOE environmental cleanup issues held in February and October 2005 and attended by DOE and contractor officials included presentations calling for a reevaluation of DOE’s contract award process. DOE’s recent efforts to improve its contracting include the following:

Initiative to address DOE’s high-risk contracting practices. In November 2005, DOE drafted a plan aimed at improving the department’s contract and project management, including management of the contract award process. DOE developed the plan in response to an effort by the Office of Management and Budget that requires agencies to address areas identified by GAO as high risk for fraud, waste, abuse, or mismanagement. Actions included in the plan to improve the contract award process include conducting better oversight of the planning process, maximizing competition for contracts, and ensuring that contracts allow DOE to properly hold contractors accountable for their performance. To date, DOE has developed performance measures and targets for the focus areas in its action plan, as well as target dates for implementing the plan. As part of this initiative, in February 2006, DOE began to increase the frequency of training for new evaluation boards and selection officials and to provide this training at the beginning of the planning process. Although this plan does not specifically address the timeliness of contract awards, the overall effort to improve contracting and project management could help identify and address underlying causes for delays in awarding contracts.

Initiative to restructure DOE’s Office of Environmental Management. Also in late 2005, the Assistant Secretary for Environmental Management announced plans to restructure the headquarters-based administrative functions within his program office. According to officials in the Office of Environmental Management, the reorganization is aimed, in part, at improving contract and project management by bringing these functions together under a new Deputy Assistant Secretary for Acquisition and Project Management.
As the reorganization is implemented, the Office of Environmental Management, at the request of a congressional subcommittee, has also commissioned the National Academy of Public Administration to undertake a general management review of the program, including reviewing the program office’s policies and processes for awarding contracts and providing advice on improving their effectiveness. As the study progresses, DOE will receive interim reports and recommendations on how to implement the reorganization and increase its effectiveness, which may also result in improving the timeliness of contract awards.

Initiative to improve performance data on contract timeliness. NNSA and DOE are implementing separate efforts to help them track contract awards against the planned dates for carrying out the process. NNSA’s effort is focused on a recently implemented data system that will help its contracting staff track ongoing contract awards against a detailed schedule of milestones. The requirement to input a detailed milestone schedule at the beginning of the contract award process applies to all of NNSA’s contracts, regardless of their dollar value. As staff carry out the process, the system will record the dates on which key steps were completed and document any delays from the schedule. The system also allows the contracting officials to generate reports on the status of ongoing contract awards, as well as analyze data on existing contract awards for evidence of systemic delays or other problems. As of April 2006, the data system was operational, and NNSA contracting officials have started using this milestone information to manage ongoing contract awards. Similarly, in March 2006, DOE’s Office of Procurement and Assistance Management began implementing plans to track the timeliness of DOE’s facility management and other high-visibility contracts against a detailed milestone schedule. According to the Director of the Office of Procurement and Assistance Management, representatives from his office, as well as other department officials responsible for awarding facility management contracts, will work to develop a realistic, but challenging, milestone schedule for carrying out the award process. According to the Director, revised DOE guidance will recommend that planning for a new facility management contract should begin no later than 2 years prior to the expiration of an existing contract. Once the milestone schedule has been established, his office will then track the contract award process to compare planned and actual key dates and provide monthly updates to the Deputy Secretary. As of April 2006, DOE had established schedules for the current and upcoming facility management contract awards, begun monitoring progress against those schedules, and begun submitting monthly status reports to the Deputy Secretary.

DOE’s efforts to improve its contracting are generally in the early stages of being planned or implemented, and not all of the efforts specifically address the timeliness of contract awards. Nevertheless, we believe these initiatives are a constructive step towards addressing long-standing weaknesses in the department’s contracting and improving its contract awards. As the department addresses the weaknesses and takes steps to improve its contracting, improved efficiency and effectiveness could also result in fewer delays in awarding contracts.
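The milestone tracking described above for NNSA’s data system and the Office of Procurement and Assistance Management’s monthly reporting can be illustrated with a minimal sketch. The milestones and dates below are hypothetical; the report does not describe either system’s actual design or data model.

```python
# Minimal sketch of milestone tracking for a contract award, in the spirit of
# the NNSA data system described above. Milestones and dates are hypothetical;
# the report does not describe the actual system's design.
from datetime import date

milestones = [
    # (milestone, planned date, actual completion date or None if pending)
    ("written plan approved", date(2006, 1, 15), date(2006, 1, 20)),
    ("solicitation issued",   date(2006, 3, 1),  date(2006, 4, 10)),
    ("proposals received",    date(2006, 5, 1),  None),
]

for name, planned, actual in milestones:
    if actual is None:
        status = "pending"
    else:
        slip = (actual - planned).days
        status = f"completed, {slip:+d} days vs. plan"
    print(f"{name}: planned {planned}, {status}")
```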
However, as discussed below, we are concerned that the efforts may fall short in two main respects, which could limit the department’s effectiveness in improving the timeliness of contract awards.

Standards for a well-functioning contract award process come from federal acquisition regulations and GAO’s framework for assessing the process at federal agencies. The framework integrates the views of federal government and industry experts on the characteristics and practices of a well-functioning contract award and management process, and it draws upon decades of experience within GAO in analyzing contracts and contracting organizations. Among the standards for a well-functioning contract award process set forth by the framework and regulations are that federal agencies should use knowledge from prior contract awards to assess the effectiveness of the contracting process and, in the interest of continuous improvement, have a systematic means for instituting best practices and lessons learned. DOE’s improvement initiatives, as they apply to the timeliness of contract awards, appear to fall short in both respects.

First, DOE has not ensured that the efforts to develop better data on the timeliness of contract awards will include all of the department’s contract awards that require a written plan and schedule. While DOE and NNSA’s efforts to develop better timeliness data will include all of the department’s facility management contracts, as well as NNSA’s other contract awards, the rest of DOE’s contracts may not be included. These excluded contracts could be a considerable portion of DOE’s contracts valued at $5 million or greater. For example, of the contracts that the department awarded in fiscal years 2002 through 2005, nearly one-quarter of the total contract value, or $3.9 billion, may not have been included in DOE’s improvement efforts. These contracts may be excluded because they were neither facility management contracts, other DOE high-visibility contracts, nor contracts awarded by NNSA. The Director of DOE’s Office of Procurement and Assistance Management said that the tracking against milestone schedules includes only contracts that represent significant risk or are critical to carrying out DOE’s programs or missions. For example, the contracts that are currently being tracked include facility management contracts, other large cleanup contracts, and contracts that may also be of particular interest to headquarters, such as those for guard services. The Director added that it is not the intent of this effort to monitor the progress of all ongoing contract awards of $5 million or greater, and that he would expect contracting officers in the field locations to be accountable for contracts within their approval authority. Nevertheless, we believe that efforts to develop better timeliness data should include all contracts valued at $5 million or greater because DOE guidance generally requires written plans and schedules for these contracts.

Second, the department has not had a systematic approach for instituting best practices and lessons learned in its contract award process, nor is it clear that the department’s efforts will result in a more systematic approach in the future. As DOE develops better data on the timeliness of its contract awards and analyzes the data to identify and address systemic problems or underlying causes of delays, the department will be developing useful information that could help ensure that future contract awards benefit from the best practices or lessons learned.
Moreover, federal regulations require agencies to have a process for ensuring that lessons learned are developed and used in planning future contract awards. Based on federal regulations and GAO’s framework, standards for a well-functioning contract award process would include a systematic approach for identifying and communicating lessons learned and best practices. Such an approach would include activities such as routinely evaluating performance data across a broad range of contracts to identify the extent to which the contract award process went well or encountered problems. In doing so, DOE would then look for trends or systemic problems and investigate the underlying causes. A systematic approach would also include a mechanism to ensure this information is readily available, in order to help officials who plan and carry out contract awards do so more efficiently.

We did find some examples in which best practices and lessons learned were identified and in some cases shared within the department, but we did not find a systematic approach for doing so. For example, in a recent DOE evaluation of the problems that occurred in awarding contracts for environmental cleanup and infrastructure services at its facilities in Portsmouth, Ohio, and Paducah, Kentucky, DOE’s review confirmed that poor planning and management of the process caused delays of 6 months to 1 year in awarding the contracts. Some of the factors mentioned were lack of timely management direction in key areas, a lengthy headquarters review and approval process, and unrealistic schedules for the awards. However, DOE concluded that the factors leading to the delays were not typical for DOE’s contracts and did not represent a systemic problem. Therefore, DOE did not communicate the information as a systemic problem. In contrast, our analysis of 31 contracts valued at $5 million or greater indicated that these types of causes for delays were commonplace.

In addition, best practices or lessons learned have been communicated at training sessions and presentations to staff carrying out contract awards, such as those offered by DOE’s Office of General Counsel. According to the DOE attorney who has led a number of these sessions, they are generally aimed at helping staff carry out the contract award process effectively and in accordance with federal regulations and DOE guidance. This may increase the department’s ability to avoid successful bid protests. Best practices or lessons learned have also been shared at annual procurement directors’ conferences held for officials who oversee the contract award process, with the most recent conference in October 2005 covering topics such as proper documentation of decisions made during the contract award process. However, several DOE officials who were responsible for contract awards said that these best practices or lessons learned were not consistently available or may have come too late in the planning for the contract award to be very beneficial. The officials added that there was no centralized source within DOE for best practices, but they could obtain lessons learned by contacting others within the department who had recently awarded contracts.
For example, an official from the Office of Environmental Management said that, in planning the contract award process for DOE’s Savannah River site near Aiken, South Carolina, she traveled to DOE’s offices in Idaho Falls to meet with officials there who had just awarded contracts for environmental cleanup at that site and for management and operation of the Idaho National Laboratory. While she said the meetings were beneficial and she obtained lessons learned documents, she saw this process as the only way to obtain best practices or lessons learned from those contract awards. In another example, the same official said that, in a recent contract award process she conducted, she would have benefited from early training on how to properly document decisions made while evaluating the proposals submitted. She added that the training was not made readily available, and it was not mandatory. After submitting draft documents for headquarters review, she eventually received helpful information on how to improve the documentation, but this would have been more beneficial earlier in the process.

DOE relies on contractors to carry out its environmental cleanup, scientific research, nuclear weapons management, and other missions vital to national health, safety, and security. Our analysis of the contracts we reviewed showed that DOE experienced delays in awarding many of these contracts to carry out its critical missions. It is unclear to what extent these delays may have been avoidable with better oversight or management, but at least some of the delays were avoidable, such as when DOE had to rework parts of the contract award process to correct errors. While DOE eventually awarded these contracts, a contract award process that takes too long runs the risk of undermining the department’s efforts to obtain needed goods or services at the best value for the government. As DOE goes forward with its efforts to improve its contract award process, it will be important to ensure that performance measures address the timeliness of all of the department’s contract awards that require written plans and schedules, and that lessons learned and best practices are consistently shared with DOE staff that will plan for and carry out future contract awards. Improving the department’s performance in awarding contracts is an important part of ensuring that DOE’s contracts provide the best value to the government.

To help ensure that DOE’s contract award process for contracts that require a written plan and schedule is efficient and effective and that DOE is obtaining the best value for the government, we recommend that the Secretary of Energy take the following two actions:

Ensure that performance measures for the timeliness of the department’s contract awards include the entire contract award process from planning to contract award and include all of the department’s contracts that require a written plan and schedule.

Establish a more systematic way of identifying lessons learned from past and current contract awards and sharing those lessons and best practices with the staff involved in planning and managing the department’s efforts to award its contracts.

We provided a draft of the report to DOE for review and comment. In written comments, the Director of the Office of Management generally agreed with our recommendations, but questioned whether we intended the recommendations to apply to all of the department’s contracts.
We modified the wording to clarify that our recommendations apply only to those contracts that require a written plan and schedule. In addition, DOE raised concerns about our nonprobability sample, our characterization of contract awards as delayed, and our view that DOE’s existing performance measures were incomplete.

Regarding our sampling methodology, DOE said that the results of our nonprobability sample should not be used to reach overall conclusions about DOE’s contracting practices because the 31 contracts we reviewed represented less than 1 percent of the 5,000 contracts DOE awarded during fiscal years 2002 through 2005 and were not representative of the entire group of contracts. However, we believe DOE has mischaracterized our scope and methodology. First, we did not derive the sample of 31 contracts from the 5,000 total contracts that DOE awarded during the period. Instead, we focused our efforts on the 131 contracts awarded by DOE that were valued at $5 million or greater because such contracts generally required a written plan, including estimated dates. The 31 contracts we reviewed represented about 73 percent of the total dollar value of those 131 contract awards valued at $5 million or greater during the period and about 24 percent of the total number of these larger contract awards. Furthermore, our report clearly states that the results of a nonprobability sample cannot be used to make inferences about a population, and we did not draw conclusions about DOE’s overall contracting processes based on the sample results.

DOE also took exception to our characterization of contract awards as delayed and our use of the department’s originally planned award date as an indication of timeliness. DOE said that the milestone schedule developed in the written plan is an internal planning tool that may change over time, and the department cannot factor in a schedule contingency for every possible event. However, as we reported, federal regulations state that the purpose of planning the contract award process is to ensure that the government obtains the needed goods and services in the most effective, economical, and timely manner. The regulations stress the importance of timeliness in awarding contracts and require that written plans for carrying out the contract award process include milestones for completing the steps in the process. Adherence to the milestones established in the written plan provides a consistent measure of timeliness and is one indicator of how well the contract award process is being managed. In addition, our report clearly states that some of the contracts DOE awarded later than planned were subject to external events and complications beyond DOE’s immediate control and recognizes that DOE’s milestone schedule cannot anticipate all possible events. Even so, we continue to believe that better management and oversight of the contract award process, including tracking actual against planned dates in the milestone schedule, could have prevented lengthy delays in awarding some of the contracts we reviewed.

In an attachment to its letter, DOE also disagreed with our conclusions that the department’s performance measures for timeliness in awarding contracts were incomplete and, as a result, the department was not addressing delays in awarding contracts. As we reported, DOE has stated that its timeliness measures were intended to evaluate the performance of contracting staff and thus included only the parts of the contract award process under their direct control.
DOE also said the department had been addressing delays in awarding contracts through its lessons learned process. We continue to believe that DOE’s performance measures for timeliness in awarding contracts could be improved by being more comprehensive. DOE’s performance measures included only the evaluation phase of the contract award process and excluded the facility management contracts, which represent a significant percentage of DOE’s overall contracting dollars. We also believe that DOE was generally not addressing these delays before our review began. The three initiatives DOE started in late 2005—to address GAO’s high-risk designation for DOE’s contract management, to restructure the Office of Environmental Management, and to improve performance data on contract timeliness—were the main efforts the department identified as steps to address delays in awarding contracts.

DOE also provided technical comments in an attachment to the letter, which we have incorporated and summarized as appropriate. DOE’s comments on our draft report are included in appendix II.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Energy. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In response to a congressional request, we determined (1) the extent to which the Department of Energy (DOE) adhered to its planned dates for awarding contracts and the factors contributing to any delays, (2) the impacts of any delays in awarding contracts, and (3) the extent to which DOE has taken steps to address delays in its contract award process. To conduct our work, we analyzed contracts valued at $5 million or greater from multiple DOE locations and interviewed contracting and program officials at DOE headquarters and field locations. We also reviewed federal laws and regulations on contracting, agency policies and guidance, GAO guidance on bid protests, and standards for assessing DOE’s management and oversight of its contract award process.

To determine the extent to which DOE adhered to its planned dates for awarding contracts, we analyzed 31 contracts from seven DOE locations that (1) were affiliated with the department’s three largest component organizations—the Office of Environmental Management, Office of Science, and the National Nuclear Security Administration—and (2) had awarded a large share of the department’s contracts valued at $5 million or greater in fiscal years 2002 through 2005, including facility management contracts. We focused on the three largest component organizations because they comprised over 80 percent of DOE’s annual budget and because they were affiliated with about two-thirds of the locations departmentwide that had awarded contracts in the four fiscal years we reviewed.
We considered the four most recent fiscal years, 2002 through 2005, in the scope of our study to help ensure that the results reflected DOE’s recent activity, while increasing the number of contracts that could potentially be selected for analysis. In fiscal years 2002 through 2005, the department awarded over 130 contracts valued at $5 million or greater, for a total value of about $16.3 billion. From seven DOE locations, we selected a nonprobability sample of 31 of these contracts, accounting for about $11.9 billion, or 73 percent, of the $16.3 billion (see table 1). Contracts in our sample ranged in value from about $5.5 million to $4.8 billion, and they included 7 of the 8 facility management contracts that the department competitively awarded during the four fiscal years we studied.

In selecting our sample, we analyzed DOE contract award data to identify locations that had awarded a large share of the department’s contracts in fiscal years 2002 through 2005. To verify the completeness and accuracy of contract award data, we interviewed knowledgeable DOE officials at the locations we visited (six of the seven field locations from which we selected contracts). On the basis of our checks, we determined that these data were sufficiently reliable for identifying DOE locations with a large share of contracts.

To determine whether DOE adhered to its planned dates for awarding the contracts in our nonprobability sample, we used contract file documents, obtained from the seven locations, to identify and then compare the planned and actual contract award dates. We considered contracts to be delayed if the actual award date was later than the planned award date. In most cases, we identified the planned award date in DOE’s schedule for completing key steps of the contract award process that DOE included in its formal written plan for awarding the contract. Because DOE guidance generally requires a plan and schedule when awarding contracts valued at $5 million or greater, the schedules in the written plan generally provided a consistent source for the planned award date. However, for 9 of the 31 contracts in our sample, we had to obtain the planned award date from another source because the associated plan and/or schedule was missing from the contract file. In these cases, we obtained the planned award date from references in other required file documents, such as the document used to justify noncompetitive awards or presentations by officials involved in awarding the contract. Similarly, for nearly all of the 31 contracts in our sample, we identified the actual award date from the award date written on the signed contract. However, for 2 of the 31 contracts, the signed contract was not available in the contract file, and we instead obtained the award date from the data system DOE uses to track its contract awards.

In addition to determining whether the 31 contracts in our nonprobability sample were awarded by the planned date, we used contract file documents to calculate how long it took DOE to make the awards. We did so by calculating the elapsed time from the date DOE initiated the contract award process to the date it awarded the contract. For 23 contracts, we used the date of the formal written plan for carrying out the contract award process as the date DOE initiated the process. Although some planning steps typically preceded the preparation of the written plan, documentation of the steps was generally not consistently available in the contract files we reviewed.
In contrast, the formal written plan was a required and more consistently available source. However, for 8 of the 31 contracts in our sample, we had to obtain the initiation date of the contract award process from a source other than the date of the written plan because the plan was undated or missing from the contract file. In these cases, we used contract file documents prepared during the early steps of the contract award process, such as correspondence from DOE officials approving the start of the process.

Furthermore, we identified some of the reasons for delays DOE experienced in awarding contracts in our sample, in part by contacting DOE officials involved in awarding the contracts. We also reviewed contract file documents, such as meeting records and the descriptions of the contract award process included in key documents generated during the contract award process. When contract file documents were used to determine the causes of delays, two analysts involved in this study independently reviewed the documents to verify that the causes were reasonably identified.

To further identify causes for delays, we analyzed documentation from bid protests filed with GAO in fiscal years 2002 through 2005, in addition to the bid protest guidance issued by GAO’s Office of General Counsel. We analyzed this documentation because it could potentially describe problems encountered by DOE during the contract award process. In correcting any problems, DOE may have repeated steps in the process, eventually delaying the contract award. When bid protest documents were used to determine whether (1) the contract award being protested was also in our sample of contracts that we analyzed for delays or (2) corrective action was agreed to by DOE or recommended by GAO as a result of the protest, two analysts involved in this study independently reviewed the documents to verify these determinations.

To determine overall impacts of any delays in awarding DOE’s contracts, as well as the extent to which the department has taken steps to address the delays, we interviewed headquarters contracting officials from the Office of Procurement and Assistance Management, as well as contracting and program officials from the department’s three largest DOE component organizations, and officials from the Office of Engineering and Construction Management, who oversee planning and management of capital projects. Also, we interviewed contracting and program officials at six of the seven field locations from which we selected contracts for our nonprobability sample as follows: Albuquerque Service Center (NNSA), New Mexico; Idaho Operations Office (EM and Nuclear Energy), Idaho; Oak Ridge Operations Office (EM, Science, NNSA), Tennessee; Portsmouth and Paducah Project Office (EM), Lexington, Kentucky; Richland Operations Office (EM), Washington; and Savannah River Operations Office (EM and NNSA), South Carolina.

Furthermore, for contracts in our sample, we analyzed file documents for evidence of adverse impacts upon companies competing for the work resulting from any delays in carrying out the contract award process. We examined transcripts from some debriefing sessions that DOE held with companies after awarding the contract, as well as other documents, such as correspondence from companies or their representatives.
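The two timeliness measures discussed in this report (GAO’s comparison of planned and actual award dates, and DOE’s 150-day standard measured from receipt of proposals) can be contrasted in a short sketch. The dates below are hypothetical and simply mirror the pattern of the Hanford laundry services example, in which an award made about 2 months late nonetheless counted as timely under the 150-day measure.

```python
# Minimal sketch contrasting the two timeliness measures discussed in this
# report: GAO's planned-vs-actual award date comparison and DOE's 150-day
# standard measured from receipt of proposals. All dates are hypothetical.
from datetime import date

process_initiated  = date(2003, 1, 10)   # e.g., date of the formal written plan
planned_award      = date(2003, 9, 1)
proposals_received = date(2003, 8, 15)
actual_award       = date(2003, 11, 1)

# GAO's measure: delayed if the actual award date is later than the planned date.
delayed = actual_award > planned_award
elapsed_days = (actual_award - process_initiated).days

# DOE's fiscal year 2003-2004 measure: timely if awarded within 150 days of
# receiving proposals, regardless of the planned award date.
doe_timely = (actual_award - proposals_received).days <= 150

print(f"Delayed under GAO's measure: {delayed}")             # True (2 months late)
print(f"Elapsed time, initiation to award: {elapsed_days} days")
print(f"Timely under DOE's 150-day measure: {doe_timely}")   # True despite the delay
```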
To further understand the impacts of delays, we also examined documents related to DOE’s contract award for environmental cleanup of its Paducah, Kentucky, site, even though this contract was not part of our sample. DOE’s Office of Procurement and Assistance Management provided us with its analysis of lessons learned and the factors that delayed this contract award. We reviewed the contract file documents to better understand the context of this analysis. However, we excluded the contract from our sample because DOE awarded it in fiscal year 2006 and, therefore, it did not fit our selection criteria. As a result, we did not include the contract in calculations or figures showing the extent of delays in awarding DOE contracts.

In addition, we reviewed federal and DOE regulations that govern contracting, DOE policy and guidance on contracting, and documents on the department’s assessment of its performance in carrying out its contract awards. We also reviewed GAO’s Framework for Assessing the Acquisition Function at Federal Agencies for determining the effectiveness of DOE’s management and oversight of the contract award process. We conducted our work from July 2005 to June 2006 in accordance with generally accepted government auditing standards.

In addition to the individual named above, Bill Swick, Assistant Director; Carole Blackwell, Nora Boretti, Kevin Jackson, Greg Marchand, Alison O’Neill, Jeff Rueckhaus, and Gretchen Snoey made key contributions to this report.
The Department of Energy (DOE), the largest civilian contracting agency in the federal government, spends over 90 percent of its annual budget on contracts to operate its facilities and carry out its diverse missions. Federal law and regulations outline the steps DOE must follow in planning and carrying out the contract award process and emphasize the importance of awarding contracts in a timely manner. Several of DOE's recent contracts have taken much longer than anticipated to award. GAO was asked to determine (1) the extent to which DOE has experienced delays in awarding contracts and factors contributing to delays, (2) the impacts of any such delays, and (3) the extent to which DOE has taken steps to address the delays.

Delays in awarding DOE contracts occurred in most of the 31 contracts that GAO reviewed. In fiscal years 2002 through 2005, DOE awarded 131 contracts valued at $5 million or greater; the 31 of these contracts GAO reviewed were affiliated with DOE's three largest component organizations and represented about 73 percent of the dollars awarded. None of the 24 contracts awarded competitively was awarded by the date planned in DOE's schedule, with 23 of the contracts awarded between several weeks and 4-1/2 years later than the planned date. Of the 7 contracts awarded without competition, 6 were awarded on time or nearly so, with the remaining contract awarded 2 months late. Delays in awarding contracts occurred, in part, because DOE had to modify its approach after beginning the contract award process. At least some of the delays were avoidable, such as when DOE reworked contract awards to correct errors. Delays in awarding contracts could increase costs both to DOE and companies competing for DOE work. Because the department does not track its costs for awarding contracts, it was not feasible to quantify the impact of these delays. Companies competing for DOE's work may also face increased costs when contract awards are delayed, such as the costs associated with ensuring that key personnel identified in proposals continue to be available. Increased costs and delays may affect the willingness of companies to compete for future DOE work, although the actual impact is unknown. Until recently, DOE had not been addressing delays in awarding contracts because incomplete performance data indicated that most of the contracts were awarded in a timely manner. In late 2005, DOE began several efforts to improve its contract award process, including implementing improved measures of contract award timeliness and restructuring the Office of Environmental Management to strengthen management of the contracting process. However, two concerns could limit the effectiveness of these efforts: (1) the efforts do not encompass all of DOE's contract awards and (2) DOE does not have a systematic method of identifying and disseminating lessons learned and best practices from past or current contract awards.
The Special Supplemental Nutrition Program for Women, Infants and Children (WIC) was established in 1972 to provide food, nutrition education, and health care referrals to low-income pregnant and postpartum women, infants, and young children. The program is administered by the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) in conjunction with state and local health departments and related agencies. WIC is almost entirely federally funded. WIC is not an entitlement program; Congress does not set aside funds to allow every eligible individual to participate in the program. Instead, WIC is a federal grant program for which Congress authorizes a specific amount of funds each year. USDA provides funding for food and nutrition services and administration. Both funding and participation have increased each year since fiscal year 2000. Congress also typically provides for a contingency fund to ensure that adequate resources are available for the program if unanticipated costs arise. WIC participants typically receive food benefits in the form of vouchers or coupons that they redeem at authorized retail vendors to obtain, at no cost to the participants, certain approved foods, including infant formula. State WIC agencies then reimburse the retail vendors for the food purchased by WIC participants. Since 1989, state WIC agencies have been required by law to contain the cost of infant formula using a competitive bidding process to award sole-source contracts unless they can demonstrate that an alternative method would yield the same or greater cost savings. Manufacturers agree to provide a rebate to the state WIC agency for every can of infant formula purchased under the WIC contract, and the state awards the contract to the bidder offering the lowest net wholesale price after subtracting the rebate from the cost of infant formula. In exchange, the state provides the contract brand of infant formula to most infants enrolled in the program, except those who are exclusively breastfed and those with a medical condition that requires the use of a non-contract infant formula. Rebates have become an important source of funding for the WIC program. In 2004, rebates totaled more than $1.6 billion and funded the benefits provided to about a quarter of WIC participants. Contracts for WIC infant formula are between states and infant formula manufacturers. States are responsible for issuing requests for bids and drafting contract provisions according to state contracting requirements. Typically, a WIC agency will draft a bid solicitation and obtain state approval for the contract language. Once the state approves the solicitation, the WIC agency will submit it to the FNS regional office for review. The regional office ensures that the contract adheres to all federal requirements and may suggest ways to improve the contract. FNS headquarters reviews the contract for final approval. After approval, the contracting process is conducted by the state. Infant formula comes in three physical forms: liquid concentrate, powder, and ready-to-feed. Infants may receive up to 31 13-ounce cans of liquid concentrate per month through WIC, or roughly the equivalent amount of powdered or ready-to-feed infant formula. Most infants are provided an infant formula that is covered by the state’s cost-containment contract. There are three categories of infant formula provided to WIC participants: Contract brand infant formula is produced by the contract manufacturer and is suitable for routine use by the majority of healthy full-term infants.
These include milk-based and soy-based infant formulas and could include milk- and soy-based infant formulas enhanced with DHA and ARA, lactose-free infant formula, added-rice infant formulas, and easy-to-digest infant formulas. “Contract brand infant formula” also includes new infant formulas introduced by the contract manufacturer after the contract is awarded, with the exception of infant formula under the following “exempt” category. Exempt infant formula is represented and labeled for use by infants with medical conditions such as inborn errors of metabolism, low birth weight, or other unusual medical or dietary problems that require the use of a more specialized infant formula. Non-contract brand non-exempt infant formula is all infant formula produced by a manufacturer other than the contract manufacturer that is suitable for routine use by the majority of healthy full-term infants. FNS regulations require local WIC agencies to obtain medical documentation to provide all exempt infant formulas and all non-contract, non-exempt infant formulas. Medical documentation, for these purposes, is a determination by a licensed health care professional authorized to write medical prescriptions under state law that an infant has a medical condition that dictates the use of these infant formulas. State spending on infant formula depends on the discounts, in the form of rebates, that states receive from manufacturers, and on the use of non-contract infant formula. Rebates are the most important factor driving state spending on infant formula because such a large proportion of WIC infant formula is purchased under cost-containment contracts with manufacturers. According to infant formula manufacturers, the attractiveness of a WIC contract depends, at least in part, on factors over which states have some control, including the extent to which a WIC contract will help a manufacturer market its products to non-WIC consumers; state-level administration of WIC contracts; and the provision of powder, concentrate, and ready-to-feed infant formula to WIC participants. These factors, in turn, could affect whether a manufacturer bids on a contract and the level of rebates offered by the manufacturers. The provision of non-rebated infant formula also affects state spending. In 2004, states received rebates on approximately 92 percent of the infant formula provided to WIC participants and saved, on average, 93 percent off the wholesale price. As a result, states paid an average of $0.20 per 13-ounce can of milk-based infant formula with a wholesale price of $2.60 to $3.57. However, while average rebates were high, rebate levels varied significantly by state. For example, in 2004, Virginia and South Carolina were paying as little as $0.07 per can of milk-based concentrate after rebates, while New York was paying $0.80 per can after rebates. Representatives of the three infant formula manufacturers identified several factors that influence how “attractive” they find a state contract. The attractiveness of a contract, in turn, could influence whether a manufacturer bids on a contract and the size of the rebate offered. Many of the factors cited by manufacturers are things over which states have at least some control. Shelf space and product placement: WIC-brand infant formulas may get more shelf space than competing brands, particularly in stores that serve areas with large concentrations of WIC participants.
Because WIC participants purchase such a large share of infant formula in some stores, retailers tend to stock more of the WIC brand of infant formula. In addition to shelf space, WIC-brand products may be placed at eye level so that they are easy to spot. All three infant formula manufacturers noted the importance of shelf space and product placement to their marketing strategies. In addition, 31 of the 51 state WIC directors who responded to our survey felt that shelf space was moderately, very, or extremely important to manufacturers in determining how much they bid on an infant formula contract. While WIC agencies do not have direct control over shelf space and product placement, some include stocking requirements in their contracts with vendors. State policies regarding authorization of WIC vendors can also affect manufacturers’ access to non-WIC consumers. Some states have authorized WIC vendors that sell exclusively or primarily to WIC participants. Manufacturers have less access to non-WIC consumers if more WIC participants purchase their infant formula at these “WIC-only” stores. In those cases, retailers that serve the non-WIC population in the area may be less likely to focus on product placement or devoting shelf space to the WIC brand, two factors that benefit manufacturers in their drive to reach non-WIC consumers. Physician and Hospital Recommendations: Having WIC contracts could also benefit manufacturers through physician recommendations. State WIC programs often work with physicians to educate them about the program and the requirement that most WIC participants use the contract brand of infant formula. Physicians may decide to recommend the WIC brand of infant formula to all patients to avoid having to differentiate between those enrolled and not enrolled in WIC. Similarly, some hospitals agree to provide WIC-brand infant formula to new mothers so that the mothers will not have to switch infant formulas after they leave the hospital. It may be easier for hospitals to provide the WIC-brand infant formula to all new mothers. Moreover, two of the infant formula companies that participate in the WIC market are divisions of pharmaceutical companies that primarily market their products directly to physicians and hospitals while also marketing, though to a lesser extent, directly to consumers. States vary in the extent to which they emphasize doctor and hospital outreach. Product Innovation: All three manufacturers cited the central role of product innovation in their business strategies. Manufacturers seek to compete on the basis of product innovation and product quality despite the fact that infant formula is a relatively homogeneous product. Manufacturers said that certain state practices could make it more difficult to pursue their core strategy of innovation and the development, distribution, and marketing of new products. These practices include the use of long-term contracts, requirements that manufacturers notify states of product changes in advance of introducing new products into the market, provisions that allow states to unilaterally extend contracts without requesting the consent of the manufacturer, state restrictions on the ways in which manufacturers market their infant formula, and restrictions on their interactions with physicians. While these state practices could inhibit innovation, many are put in place to protect states from increases in infant formula costs.
State Billing Systems: All three manufacturers cited the accuracy of state billing systems as a key factor they consider when developing bids, and all stated that the vast majority of state billing systems need improvement. One manufacturer said that most states rely on antiquated information technology that is prone to costly billing errors. According to FNS officials, disputes over billing for infant formula rebates have long been a problem in the WIC program. In the past, some states requested reimbursement from infant formula manufacturers for every can of infant formula listed on redeemed vouchers. However, some WIC participants did not purchase every can of infant formula listed on the voucher. In these instances, the manufacturers claimed they were being billed for purchases that were never made. Partly in response to these disputes, a new provision was included in the Child Nutrition and WIC Reauthorization Act of 2004 that requires states to bill only for infant formula actually purchased. Contract size: State WIC directors said they believe that contracts that cover more infants yield higher rebates; however, the manufacturers said that the largest contracts may not draw their highest bids. A few state WIC directors expressed concerns that new provisions in the Child Nutrition and WIC Reauthorization Act of 2004 limiting the size of state coalitions and requiring separate contracts for milk-based and soy-based infant formula could reduce the size of contracts and the size of rebates. However, manufacturers noted that costly shifts in demand occur when very large states or coalitions change contractors. Manufacturers must be able to respond to these shifts by quickly increasing or decreasing production, and must, therefore, consider their own production capacity when they bid on very large contracts. Contract Provisions: Manufacturers noted that states sometimes include contract provisions that manufacturers consider complex, ambiguous, and extraneous, and that inclusion of these provisions could affect rebates. For example, manufacturers cited provisions that increase the potential liabilities of manufacturers, give states control over manufacturer activities, or require manufacturers to provide products or services not directly related to the sale of infant formula (such as sponsoring conferences, providing literature on nutrition education, or providing free infant formula) as particularly unattractive. Because states rarely modify contract provisions in response to manufacturer concerns, manufacturers may respond to these provisions by either not bidding on contracts or offering lower rebates. Provision of Powder, Concentrate and Ready-to-Feed Infant Formulas: All three manufacturers cited the importance of state policies governing the provision of powder, concentrate, and ready-to-feed infant formulas. Historically, the WIC program has issued more concentrate than powder, but the use of powder has increased since 2000. Because ready-to-feed infant formula is the most expensive, WIC regulations allow WIC agencies to provide it only in certain circumstances. Twenty-nine states were able to provide us with data on their use of the different forms of infant formula in both 2000 and 2004. In 2000, 55 percent of all infant formula issued in those states was in the form of liquid concentrate. By 2004, liquid concentrate represented only a third of all infant formula provided to WIC participants in those states.
Powder use may have increased because it can be more convenient for mothers who are partially breastfeeding: they can reconstitute small amounts of powdered infant formula at a time, whereas liquid concentrate must be diluted all at once. Manufacturers did not provide information on how the provision of different forms of infant formula might affect their bids on infant formula contracts. However, if concentrate is more profitable, the shift to powder could reduce manufacturer profits and, in turn, the rebates they offer to states. Alternatively, if powder is more profitable or there are no differences in the profitability of different forms, the shift to powder might not affect rebates. Although non-contract infant formula, including both exempt and non-contract, non-exempt infant formula, accounts for less than 10 percent of infant formula purchased through WIC, its use can have a significant impact on total infant formula spending because it can cost as much as 10 to 20 times more per can than rebated formula. Among the 27 states that were able to provide us with data on their use of contract, exempt, and non-contract, non-exempt formulas in both 2000 and 2004, use of exempt formula increased and use of non-contract, non-exempt formula decreased over the 4-year period. Figure 2 shows the average share of each type of formula provided to WIC participants in 43 states in 2004. Rebate savings have remained relatively stable since 1997 after adjusting for inflation, but if recent increases in the amount states pay for each can of infant formula they purchase through WIC continue, fewer participants will likely be served with rebates in the future. About a quarter of all WIC participants are served using rebate savings. However, over the past 5 years, the amount states pay for infant formula has increased somewhat, particularly among states that have awarded new contracts. There is some concern that if the price states must pay for infant formula continues to increase, fewer participants will be served using rebates. We estimated that in 2004, if all states had paid as much per can of infant formula as the two states with the lowest rebates, approximately 400,000 fewer children would have been able to enroll in WIC nationwide. After increasing substantially in the years prior to 1997, the total amount that states received from manufacturers in infant formula rebates has remained relatively constant since that time. As shown in figure 3, rebate savings have remained at about $1.6 billion per year after adjusting for inflation. The percentage of participants served using rebates has also remained relatively stable over the past 5 years, at about 25 percent. In 2004, some 2 million participants were served using rebate dollars. Although both total rebate savings and the share of participants served using rebate savings have changed little in recent years, we found that the amount states pay per can of infant formula, after taking rebates into account, has increased over the past few years. The average net price states paid per can of milk-based concentrate infant formula increased from $0.15 in fiscal year 2002 to $0.21 in fiscal year 2005 after adjusting for inflation. Because most contracts lock in the price states pay for infant formula for up to 5 years, average prices tend to move slowly. The price increases are more apparent among contracts that were newly awarded each fiscal year.
Among newly awarded contracts, there was a fourfold increase in the average net price of a can of infant formula over the 3-year period, from $0.10 in 2002 to $0.43 in 2005. Figure 4 shows rebates for newly awarded contracts between fiscal year 2000 and fiscal year 2005. Although the amount states pay for infant formula varies by state, the increases in the average net price were not driven by large increases in just a few states, but reflect higher wholesale prices and lower rebates nationwide. Eight of the 9 states that implemented a new contract in 2002 did so at either the same net price as under their previous contract or at a lower net price. This trend shifted over the next few years. By 2004, a majority of states implementing new contracts saw their net price increase, and in 2005, every state that implemented a new contract did so at a higher net price than under its previous contract. The extent to which net prices increased among newly awarded contracts varied, but most states did not experience significant price increases. Among states that implemented new contracts in 2005, the average net price for a can of milk-based concentrate was $0.43 after rebates. This represented a discount of about 87 percent off the wholesale price. In a few states, however, the net price states paid for milk-based concentrate under their contracts was significantly higher than the average. New York implemented a contract in 2004 that provided milk-based concentrate for a net price of $0.80 per can, a discount of 75 percent off the wholesale price. Similarly, North Dakota implemented a contract in 2005 that provided a net price of $0.83 per can, a discount of 77 percent off the wholesale price. The impact of reduced rebates per can of infant formula was exacerbated by an increase in the use of more expensive types of infant formulas. Since the early 1990s, infant formula manufacturers have diversified their product lines to include a greater number of infant formula types, all of which have higher wholesale prices than traditional unenhanced milk- and soy-based infant formulas. The most significant change in infant formulas came with the introduction of DHA- and ARA-enhanced infant formulas starting in 2002. All three manufacturers have introduced DHA- and ARA-enhanced infant formulas at prices that are higher than the unenhanced versions. At the time of our survey in mid-2005, 23 states reported that they issue enhanced infant formula as their primary contract brand; only 8 states reported that they do not approve enhanced infant formula. In addition, four of the five most recent contracts to be awarded specified the enhanced infant formulas as the primary contract brand. The increased use of infant formulas with a higher wholesale price may have contributed to the increase in the net price of infant formula under new cost-containment contracts if infant formula companies sought higher compensation for their more expensive products. Increases in the cost of infant formula have not yet had a significant impact on the share of participants served with rebate savings, but if infant formula costs were to continue to increase, it is likely that fewer participants would be served using rebate savings in the future absent funding increases. To illustrate how infant formula prices can affect WIC participation, we considered how three different scenarios would have affected participation in WIC during fiscal year 2004.
We estimated the number of WIC participants that would have been served using rebate savings if the average rebate in 2004 had been 90 percent of the wholesale price of infant formula, slightly less than the actual discount of 93 percent. We also considered scenarios reflecting larger reductions in rebates, to 85 percent and 75 percent of the wholesale price of infant formula. We compared our estimates to the actual number served using rebates in fiscal year 2004. We found that with even a modest reduction in rebates across all states, fewer participants could be served: If rebates were equal to 90 percent of the wholesale price in all states in 2004, about 70,000 fewer participants would have received WIC benefits. If rebates were equal to 85 percent of the wholesale price in all states in 2004, about 175,000 fewer participants would have received WIC benefits. If rebates had fallen to 75 percent of the wholesale price in all states in 2004, the program would have been able to serve about 400,000 fewer participants. Because most states are still under existing contracts negotiated in prior years, it would take some time for the impact of reduced rebates to be fully realized. Many states will be under their current contracts through 2006 and 2007, and many have contracts that continue through 2008 or 2009 if they opt to extend their contracts as permitted. State WIC directors in 45 of 51 states reported that their infant formula contracts allow for extensions ranging from 1 year to 4 years. However, if the recent decline in rebates continues, there could be an impact on the number of participants served using rebates within the next few years. Both states and FNS try to contain costs, but while states work to maximize their own savings, FNS also works to sustain the competitive bidding system. State WIC agencies have taken a variety of actions to promote cost containment. For example, some have joined coalitions or barred the provision of non-contract, non-exempt infant formula. Similarly, FNS promotes cost savings through a requirement that infant formula manufacturers provide the same percent discount for all infant formulas, even new infant formulas introduced when a contract is already in effect. However, FNS attempts to balance its efforts to promote cost containment with its larger goal of sustaining the competitive bidding system. For example, in some cases, FNS did not approve provisions in cost-containment contracts that would save states money if FNS believed these provisions would reduce competition. Through their infant formula contract bid solicitations, states have taken steps to promote cost containment. For example, by including provisions they believe manufacturers might find favorable or by omitting provisions they believe would have a negative impact on their rebate savings, states have sought to maximize their savings. As other examples: Thirty states have joined coalitions in an effort to streamline the bidding process and leverage greater bargaining power when negotiating contracts with manufacturers. Some states allow contract extensions only if both the state and the infant formula contractor agree, rather than providing for unilateral contract extensions at state option. States have also taken steps to limit the amount they spend on non-contract infant formulas: Sixteen states purchase some or all of the exempt infant formula that is provided to participants directly from manufacturers or from low-cost providers.
Twelve states do not provide more expensive non-contract brand, non-exempt infant formula to WIC participants, and nine states limit statewide use of non-contract, non-exempt infant formula to a specified share of all infant formula provided. In addition, 27 states limit the amount of time that non-contract, non-exempt infant formulas can be issued to participants. Eight states do not provide more expensive enhanced infant formula. Despite these state efforts to contain costs, opportunities remain for more states to further reduce the use of non-rebated infant formulas. Use of non-contract, non-exempt infant formula varies. Twelve states reported that they have policies in place not allowing the use of non-contract, non-exempt infant formula. Of the 43 states that provided complete information on their average monthly usage of contract, exempt, and non-contract, non-exempt infant formulas in 2004, 27 states reported that between 0.3 percent and 4 percent of infant formula provided is non-contract, non-exempt, and 8 states reported use between 4.5 percent and 9 percent. As required by the Infant Formula Act, infant formula is a relatively homogeneous product. Consequently, it is unlikely that large discrepancies among states in the use of non-contract, non-exempt infant formulas designed for use by healthy infants can be explained by differences in health conditions of infants receiving WIC infant formula in these states. There are also large discrepancies in the use of exempt infant formula designed to treat infants’ medical conditions. State provision of exempt infant formula ranges from a low of 0 percent to a high of about 23 percent. (See app. II for information on use of non-contract, non-exempt and exempt infant formula by state.) Again, as with non-contract, non-exempt infant formula, it is not clear how infants’ medical needs could vary so significantly among states. Figure 6 shows the minimum, median, and maximum use of each type of infant formula in 2004. Most states reported that they require participants to obtain medical documentation for non-rebated infant formula, as required by FNS. However, some local WIC directors said there are instances in which doctors do not diagnose a medical condition but still write a prescription at the request of the participant. Local WIC agency staff told us that they identify these cases only by confirming diagnoses with each physician, which not all agencies have the resources to do. As a result, some WIC participants may be receiving non-rebated infant formula even though they do not have a medical condition requiring such infant formula. In its oversight capacity, FNS has helped states increase their cost-containment savings by providing technical assistance to states as they develop their cost-containment contracts and by implementing regulations that help states achieve cost savings. For example, FNS requires manufacturers to provide the same discount on all infant formulas after state costs rose with the introduction of a new infant formula. In 1993, one company introduced a lactose-free infant formula to accommodate infants with intolerance for milk-based infant formula. When this company introduced its lactose-free infant formula, it provided a significantly lower rebate amount on this infant formula than it provided on its milk-based infant formula covered by the existing contract, even though the wholesale price of the new infant formula was higher per can.
As a result, states received a much lower discount on the new infant formula than they received on the original contract infant formula. At the same time, prescriptions for the lactose-free infant formula increased because the company marketed the infant formula directly to doctors. The cost savings states achieved through rebates began to erode. To maintain competition by ensuring that all manufacturers could bid on contracts and to help preserve rebate savings, FNS in 2000 established the requirement that infant formula manufacturers provide the same percent discount for all infant formulas, even those introduced when a contract is already in effect. The requirement that manufacturers provide the same percent discount for infant formulas introduced during a contract slows but does not completely stem increases in state spending on infant formula. Even with the requirement, states must still pay more for each can of newly-introduced infant formula when the wholesale price of the infant formula is higher. As a result, manufacturers still have a financial incentive to introduce and market more expensive infant formula because they charge a higher price per can. By 2003, all three manufacturers had introduced new infant formulas enhanced with the fatty acids DHA and ARA. Like other newly-introduced infant formulas, these enhanced infant formulas were more expensive. Two state officials told us that the manufacturers replaced the milk-based infant formula the state was providing to WIC participants in some parts of the state with the enhanced infant formula. Retail outlets stopped stocking the original milk-based infant formula; as a result, states had to purchase the enhanced infant formula. In contrast, when manufacturers introduced new infant formulas in the past, states had a choice not to provide the new infant formulas or to limit their use because the original milk-based infant formula was still available. At least one state has since introduced a contract provision that requires manufacturers to charge the same price per can for newly-introduced products when those products replace the primary contract infant formula. Recognizing the history and dynamics of the infant formula market and the importance of competition to the cost-containment goals of the WIC program, FNS has attempted to ensure that all manufacturers can compete for state infant formula contracts regardless of their share of the market. Infant formula manufacturers operate within a highly concentrated industry. Beginning in the 1970s, three manufacturers (Wyeth-Ayerst, Mead Johnson, and Ross Laboratories) dominated the infant formula market. By the 1990s, the industry had shifted. Wyeth-Ayerst exited the domestic infant formula market in 1996. By then, Nestlé had begun selling infant formula and had at least one state WIC contract. By 2000, Nestlé still had the smallest market share of the three companies.
In order to ensure the sustainability of competitive bidding, FNS has not allowed provisions in cost-containment contracts that would have boosted state savings when FNS believed those provisions would reduce competition. For example, during negotiations FNS held with the New England and Tribal Organization (NEATO) coalition from January 1995 through February 1996, FNS disapproved a contract provision that would have required that manufacturers demonstrate that they had production capacity sufficient not only to meet the infant formula contract commitments they had with NEATO, but also the commitments they had made with other states and those to be awarded. NEATO included this provision, in part, because its contractor at the time had run out of infant formula, and, as a result, smaller states in the coalition did not have enough infant formula. FNS rejected the requirement that manufacturers prove they could fulfill contract commitments with other states, which it viewed as intrusive and which it felt hindered fair and open competition. Similarly, in 2005, FNS did not approve a contract provision that would have allowed Wisconsin to continue with its current contractor if the bids it received differed by $10,000 or less. Under the provision, the state could have avoided expenses associated with switching contracts. FNS did not approve the provision because it believed it would give the current contract holder an advantage over the other two competitors. Several state WIC directors we interviewed questioned FNS’ role in the infant formula contracting process. Three state WIC directors pointed out that infant formula cost containment was initiated by states. In addition, a few state WIC directors noted that the manufacturers have stayed in the WIC market for years, even though some say that providing infant formula under these contracts is not profitable. FNS officials, however, pointed out that Wyeth stopped bidding on WIC contracts, and that competition would decrease if any other manufacturers stop bidding on WIC contracts. The Child Nutrition and WIC Reauthorization Act of 2004 contained two provisions that promote competition among infant formula manufacturers by limiting the size of contracts and coalitions to ensure all manufacturers can bid on contracts regardless of their capacity: a requirement that states or coalitions serving more than 100,000 infants request separate bids for milk-based and soy-based infant formulas, and limits on the size of state coalitions. The act also addressed manufacturers’ concerns about how WIC agencies implement their infant formula contracts: a requirement that states provide participants, as the infant formula of first choice, the infant formula designated by the contract manufacturer as its “primary contract infant formula”; a requirement that manufacturers not only raise the rebates they provide to states in response to any increase in wholesale price, but also lower the rebates they provide to states by an amount equal to any decreases in wholesale price; and a requirement that states accurately account for the number of cans of infant formula purchased, not just the number of vouchers redeemed at retailers. Although states developed and implemented measures to contain the cost of infant formula, FNS has played an increasingly important role in balancing the goals of containing infant formula costs and maintaining competition among infant formula manufacturers.
FNS actions have not always maximized the cost savings of individual states, but by ensuring that all interested manufacturers can compete for WIC contracts, it has helped to ensure the long-run sustainability of the WIC cost-containment system. However, if manufacturers continue to emphasize a business strategy focused on innovation and product differentiation, WIC cost-containment savings are likely to erode further despite existing measures to protect state cost savings. By providing manufacturers with a higher per-can reimbursement for newly introduced, more expensive products, and by allowing states to issue non-contract, non-exempt infant formulas to participants with physician prescriptions, federal regulations encourage manufacturers to diversify their product lines and charge more for infant formula, even within a contract period. Unless FNS takes additional steps to safeguard rebate savings, total rebates could continue to erode and the number of participants who can be served by WIC will likely fall. To help states preserve rebate savings generated through infant formula cost containment and reduce costs associated with the purchase of non-rebated infant formula, we recommend that the Secretary of Agriculture take the following two actions: Consider providing additional guidance related to product changes so that state costs do not increase when infant formula manufacturers introduce new or improved infant formulas, by encouraging all states to include in their contracts a provision that requires manufacturers to provide new and improved products marketed under a different name at the net wholesale price specified in the contract when the new product replaces the product the manufacturer designated as its “primary contract infant formula.” We recommend that the Secretary consider implementing a regulatory provision if necessary to ensure that states implement the guidance. Provide guidance or technical assistance to state agencies on ways to reduce the use of non-rebated infant formulas in states where use of these infant formulas is high. On March 13, 2006, we met with FNS officials to discuss their comments. The officials said they generally agreed with our recommendations. They stated that they will provide guidance related to product changes to assist state agencies in minimizing cost increases when infant formula manufacturers introduce a new infant formula that replaces the primary contract infant formula. Officials agreed that a regulatory change would be necessary to require that state agencies include provisions in their contracts to accomplish this goal. They cautioned, however, that they have limited influence over the recent increases in infant formula costs attributable to manufacturers’ price and product changes. While we agree that FNS is constrained in its ability to affect manufacturer marketing and pricing decisions, we believe the agency should take any steps available to contain infant formula costs, given the importance of cost-containment savings to serving as many eligible women, infants, and children as possible. Agency officials also stated that they will continue to provide guidance to state agencies related to the issuance of non-contract infant formula for those states where the use of these infant formulas appears high.
However, officials expressed concern that some states may have misreported their use of exempt and non-contract, non-exempt infant formulas due to confusion over terminology and interpretation of the survey instrument. Officials noted that since some state agencies may require the same medical documentation for exempt infant formulas, non-contract, non-exempt infant formulas, and certain contract infant formulas other than the primary contract brand, some states may have misunderstood the distinction between the three types of infant formula we identified and misreported use of the different types of infant formula. We believe that the survey provided sufficient examples to allow states to distinguish between the three infant formula types we identified. We pretested our survey with officials in five states, in which we discussed their understanding of each question and the terms we used, and all of the officials we spoke with understood the differences between the categories of infant formula we identified. However, we acknowledge that because states are not required to track infant formula use by the categories we used or by the technical categories defined in the Infant Formula Act, our estimates of the use of the three types of infant formula may not be consistent across all states. Agency officials also expressed concern over the fact that we requested data for non-contract and exempt infant formulas by the average monthly percentage of total cans issued rather than by average monthly percentage of participation. Agency officials stated that some state agencies capture their data in terms of percentage of participation, and this may have contributed to misreporting. In our discussions with state officials, we were told that because information on the number of cans provided through WIC is usually used to bill manufacturers for infant formula rebates, most states track the number of cans of infant formula provided to WIC participants. In 2003, GAO reported similar findings related to the use of non-contract infant formula based on a survey that used different terminology and a different measure of use. The consistency between the findings in the two reports reinforces the ongoing importance of ensuring that states clearly understand the distinction between the different types of non-contract infant formula, monitoring the use of different types of infant formula, and providing technical assistance to state agencies where use of non-contract infant formula is high. Agency officials also provided technical comments, which we incorporated into the report where appropriate. This included revising data that had been provided by two states that had reported particularly high use of non-contract, non-exempt infant formula. We are sending copies of this report to the Secretary of USDA, relevant congressional committees, and others who are interested. Copies will be made available to others upon request, and this report will also be available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or fagnonic@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This appendix provides a detailed description of the scope and methodology we used to determine (1) what factors influence program spending on infant formula, including the role of rebates that states receive through infant formula cost-containment contracts; (2) how the level of savings resulting from infant formula cost containment has changed over the past 5 years and the implications of these changes for the number of participants served; and (3) how federal and state policies and guidance have influenced state spending on infant formula. To assess what factors influence program spending on infant formula, including federal and state policies, we surveyed state directors of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) in all 50 states and the District of Columbia. All 51 survey recipients responded to our survey, but not all respondents provided answers to every question. Where fewer than 51 responses were provided, we noted in the text the number of respondents on which the finding was based. We pretested the survey questionnaire with state WIC officials in five states. During these pretests, we administered the questionnaire and asked the officials to fill it out as they would if they had received it in the mail. After completing the questionnaire, we interviewed the respondents to ensure that the questions were clear and unbiased, the data we requested were feasible to collect, and the questionnaire did not place an undue burden on the agency officials completing it. To encourage respondents to complete the questionnaire, we sent one follow-up mailing containing the full survey instrument to nonrespondents approximately 3 weeks after the initial mailing and a second follow-up letter about 2 weeks later. We also conducted a review of literature on infant formula cost containment and spoke with officials from the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS); state WIC directors in Illinois, Massachusetts, Minnesota, New Jersey, New York, Pennsylvania, and Washington; and local WIC directors from Belford, New Jersey; Chicago, Illinois; Odessa, Texas; Salt Lake City, Utah; and Tempe, Arizona. These state and local WIC directors were selected based on recommendations of the National WIC Association and FNS and represent several different geographic areas. We included states that belong to contracting coalitions as well as those that contract themselves, and those that had rebid their infant formula contracts recently as well as those that last rebid their contracts several years ago. We also interviewed individuals with expertise related to WIC and infant formula cost containment from the National WIC Association and the Center on Budget and Policy Priorities. In addition, we interviewed representatives from Nestlé, Mead Johnson, and Ross Products to understand the perspectives of the infant formula manufacturers. To assess trends in rebate savings and in the number of participants served with rebate savings, we analyzed administrative data we received from FNS on WIC program participation, food costs, and total rebates from 1990 through 2005. We assessed the reliability of the data by reviewing existing information about the data and the system that produced them and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We adjusted the rebate figures for inflation using the producer price index for pharmaceuticals. 
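The inflation adjustment is a standard deflation step: each year’s nominal rebate total is rescaled by the ratio of the base-year index to that year’s index. A minimal sketch follows; the index values and rebate totals are hypothetical placeholders, since the report does not reproduce the underlying series.

```python
# Minimal sketch of deflating nominal rebate totals with a price index.
# All numbers below are hypothetical placeholders for illustration.
ppi = {1997: 100.0, 2000: 108.0, 2004: 121.0}                 # index, 1997 = 100
nominal_rebates = {1997: 1.30e9, 2000: 1.45e9, 2004: 1.63e9}  # dollars

BASE_YEAR = 2004

def to_base_year_dollars(amount: float, year: int) -> float:
    """Restate a nominal amount in base-year dollars using the index."""
    return amount * ppi[BASE_YEAR] / ppi[year]

for year, amount in sorted(nominal_rebates.items()):
    real = to_base_year_dollars(amount, year)
    print(f"{year}: ${real / 1e9:.2f} billion in {BASE_YEAR} dollars")
```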
We chose this index because infant formula is marketed in a way that is similar to pharmaceuticals. To estimate the impact of reduced rebates on the number of participants served with rebate savings, we used data we received from FNS on total rebates for 2004. This allowed us to estimate the share of infant formula spending that was spent on rebated infant formula and the share that was spent on non-rebated infant formula, including exempt and non-contract, non-exempt infant formulas. We then held spending on non-rebated infant formulas constant and estimated the reduction in total rebates that would have resulted if the average rebate states received through their cost-containment contracts in 2004 had been lower than the actual average discount of 93 percent of the wholesale price of milk-based concentrate. We considered three scenarios to represent recent trends in infant formula rebates, either nationwide or in individual states: a reduction in rebates from 93 percent of the wholesale price of infant formula to 90 percent, a modest decrease to rebate levels experienced by states in 2000; a moderate decrease to 85 percent of the wholesale price, the average rebate on newly awarded contracts in 2004; and a larger decrease to 75 percent of the wholesale price, a decrease similar to that experienced by two states. To estimate trends in the per-can cost of infant formula, we also analyzed information we received from FNS on the rebates individual states received per can of milk-based concentrate infant formula from fiscal years 2000 to 2005. We used these data to calculate trends in state infant formula costs. Kay Brown, Assistant Director; Carol Bray; Kathryn Larin; Lise Levie; Lynn Milan; Marc Molino; Luann Moy; Susan Pachikara; Tovah Rom; and Daniel Schwimer made significant contributions to this report. Breastfeeding: Some Strategies Used to Market Infant Formula May Discourage Breastfeeding; State Contracts Should Better Protect Against Misuse of WIC Name. GAO-06-282. Washington, D.C.: February 8, 2006. Means-Tested Programs: Information on Program Access Can Be an Important Management Tool. GAO-05-221. Washington, D.C.: April 11, 2005. Nutrition Education: USDA Provides Services through Multiple Programs, but Stronger Linkages among Efforts Are Needed. GAO-04-528. Washington, D.C.: April 27, 2004. Food Assistance: Potential to Serve More WIC Infants by Reducing Formula Cost. GAO-03-331. Washington, D.C.: February 12, 2003. Food Assistance: WIC Faces Challenges in Providing Nutrition Services. GAO-02-142. Washington, D.C.: December 7, 2001. Food Assistance: Research Provides Limited Information on the Effectiveness of Specific WIC Nutrition Services. GAO-01-442. Washington, D.C.: March 30, 2001. FNS: Special Supplemental Nutrition Program for Women, Infants and Children: Requirements for and Evaluation of WIC Program Bid Solicitations for Infant Formula Rebate Contracts. OGC-00-65. Washington, D.C.: September 18, 2000. Food Assistance: Financial Information on WIC Nutrition Services and Administrative Costs. RCED-00-66. Washington, D.C.: March 6, 2000. Food Assistance: Efforts To Control Fraud and Abuse in the WIC Program Can Be Strengthened. RCED-99-224. Washington, D.C.: August 30, 1999. Food Assistance: Information on WIC Sole-Source Rebates and Infant Formula Prices. RCED-98-146. Washington, D.C.: May 11, 1998. Food Assistance: Information on Selected Aspects of WIC. T-RCED-98-128. Washington, D.C.: March 17, 1998.
Food Assistance: WIC Program Issues. T-RCED-98-125. Washington, D.C.: March 17, 1998. Food Assistance: A Variety of Practices May Lower the Costs of WIC. RCED-97-225. Washington, D.C.: September 17, 1997.
The Special Supplemental Nutrition Program for Women, Infants and Children (WIC) provides food, nutrition education, and health care referrals to close to 8 million low-income pregnant and postpartum women, infants, and young children each year. About a quarter of these participants are served using rebate savings from contracts with infant formula manufacturers. WIC is administered by the Department of Agriculture's Food and Nutrition Service (FNS). To better understand infant formula cost containment, this report provides information on (1) factors that influence program spending on infant formula; (2) how the level of savings resulting from infant formula cost containment has changed and the implications of these changes for the number of participants served; and (3) steps federal and state agencies have taken to contain state spending on infant formula. Rebates drive state spending on infant formula, but use of non-rebated formula increases state costs. In fiscal year 2004, states paid an average of $0.20 per can for milk-based concentrate formula, a savings of 93 percent off the wholesale price. However, states also allow some use of non-rebated formula that can cost states more than 10 times as much as contract formulas. For example, in 2004, 8 percent of infant formula provided to WIC participants was non-rebated. Rebate savings from infant formula cost-containment contracts have allowed WIC to serve an additional 2 million participants per year, but recent increases in the cost per can of formula could lead to reductions in the number of participants served with rebates. Rebate savings have remained near $1.6 billion per year since 1997 after adjusting for inflation, but the amount states pay per can of infant formula has increased since 2002. We estimated that in 2004, if the cost per can of formula had increased in every state by as much as it did in two states, approximately 400,000 fewer participants would have been able to enroll in WIC nationwide. State and federal agencies have both taken steps to contain WIC infant formula costs, but FNS also focuses on sustaining the cost-containment system. States have sought to increase their cost savings through their infant formula contracts--for example, by joining coalitions to leverage greater discounts. Some also try to restrict the use of the more expensive non-contract formulas. FNS, in turn, helps states to contain costs through its review of contracts and through policy and guidance. For example, FNS reduced--but did not eliminate--the price increases that can result from the introduction of new, more costly formulas. FNS has also used its oversight authority to ensure that all interested manufacturers can compete for state infant formula contracts in an effort to maintain the long-run sustainability of the infant formula cost-containment system.
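The rebate arithmetic summarized above can be reproduced with a short back-of-envelope calculation. The sketch below uses round figures taken from the report (roughly $1.6 billion in annual rebates serving about 2 million participants, a 93 percent average discount, and a $2.60 to $3.57 wholesale range); the proportional scaling of rebates and the flat per-participant cost are simplifying assumptions for illustration, not GAO’s actual estimation model.

```python
# Back-of-envelope sketch of the WIC rebate arithmetic. Inputs are round
# figures from the report; the scaling assumptions are illustrative only.
WHOLESALE_RANGE = (2.60, 3.57)  # dollars per 13-oz can, milk-based concentrate
AVG_DISCOUNT = 0.93             # average rebate as a share of wholesale price
TOTAL_REBATES = 1.6e9           # approximate annual rebate savings, dollars
SERVED_BY_REBATES = 2.0e6       # participants served using rebate savings

# Net price per can after the rebate (brackets the reported $0.20 average).
for wholesale in WHOLESALE_RANGE:
    print(f"Wholesale ${wholesale:.2f} -> net ${wholesale * (1 - AVG_DISCOUNT):.2f}")

cost_per_participant = TOTAL_REBATES / SERVED_BY_REBATES  # roughly $800 per year

def participants_lost(new_discount: float) -> float:
    """Scale total rebates with the discount rate, holding all else fixed,
    and translate the lost savings into participants no longer served."""
    new_rebates = TOTAL_REBATES * (new_discount / AVG_DISCOUNT)
    return (TOTAL_REBATES - new_rebates) / cost_per_participant

for d in (0.90, 0.85, 0.75):
    print(f"Discount {d:.0%}: roughly {participants_lost(d):,.0f} fewer participants")
```

With these rough inputs, the three scenarios come out near the report’s estimates of about 70,000, 175,000, and 400,000 fewer participants at 90, 85, and 75 percent discounts, respectively.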
One way for DOD to minimize the cost and time needed to modify weapon systems is by using an open systems approach for system design and development. An open system allows system components to be added, removed, modified, replaced, or sustained by consumers or different manufacturers in addition to the manufacturer that developed the system. It also allows independent suppliers to build components that can plug into the existing system through the open connections. Fundamental elements of an open systems approach include the following. First, designing a system with modular components that isolate functionality: this makes the system easier to develop, maintain, and modify because components can be changed without significantly affecting the remainder of the system. Second, developing and using open, publicly available standards for the key interfaces, or connections, between the components: interface standards specify the physical, power, data, and/or other connections between components. Not all interfaces in a system need to use open standards for the system to be considered “open,” and it can be costly and impractical to manage hundreds or thousands of interfaces within a system. Rather, open standards should be identified at key interfaces between the modules that are likely to change, may frequently fail or need to be replaced, or are needed for interoperability. Third, obtaining data rights to interfaces when open standards are not available: DOD describes the acquisition of technical data, such as design drawings, specifications, and standards, as critical to enabling opportunities for competition in the modification and sustainment of weapon systems throughout their life cycles. Many consumer products, including U.S. appliances and smartphones, are considered to be open systems because they use widely available hardware and software standards at key interfaces. For example, U.S. appliances are designed to use a particular wall socket standard, so that they can plug into any power outlet without consumers needing to worry about which brand product is compatible in their homes. Similarly, headphone jacks use a common, open standard, which enables consumers to purchase headphones made by different manufacturers and plug them into many types of devices, including their MP3 players, cell phones, and stereos, which may be built by other manufacturers. In addition, the Android™ operating system, used on smartphones, allows individual software applications introduced by third-party developers to connect to the Android operating system through an open software interface. This gives customers many choices and helps keep prices low. Incorporating an open systems approach prior to the start of development increases the likelihood that open systems considerations are included in program requirements and inform a program’s future competitive strategy, which can significantly reduce upgrade and maintenance costs later. Introducing this approach later in a program’s life cycle, such as during a planned modification or upgrade, is more difficult, complex, and costly, as it may require significant modifications to an already-developed system. Legacy DOD programs that we have reported on in the past did not implement an open systems approach early and are now facing difficult and costly system modifications later in their respective service lives.
The B-2 bomber and the C-130 aircraft are two examples: The Air Force is spending over $2 billion to upgrade the B-2 bomber’s communications, networking, and defensive management capabilities. Because the B-2 program’s prime contractor is the sole system integrator in possession of proprietary technical data and software, there is no opportunity for competition, a critical tool for achieving the best return on investment and driving down costs. The Air Force planned to replace the aging avionics systems on its C-130 aircraft with open architecture avionics systems. However, the individual cockpits had been modified throughout years of maintenance and differed from one another to such an extent that the upgrades required custom-built hardware and software; less expensive commercial off-the-shelf hardware and software could not be used. Because of the various configurations, C-130 modernization cost estimates increased from $4 billion to over $6 billion, and the Air Force significantly reduced the number of planes it planned to modernize, from 519 to 221, between fiscal years 2001 and 2011. The Air Force terminated the program in its fiscal year 2013 budget submission because of its escalating costs. DOD understands the efficiencies of open systems and has taken steps to incorporate an open systems approach into its policy over the past two decades. As far back as 1994, the Under Secretary of Defense for Acquisition and Technology directed the use of an open systems approach in the acquisition of weapon system electronics. That same year, DOD established an Open Systems Joint Task Force to oversee the military services’ transition to open systems and coordinate open systems-related efforts, such as training and standards development. The task force developed several tools to aid program managers with this approach, but was later disbanded. In 1998, the Defense Science Board cited pockets of success in leveraging open systems, but noted that DOD lacked a unifying concept and required a reconfiguration of management processes and aggressive leadership to facilitate open systems implementation. This concern was addressed in DOD’s 2000 acquisition directive and in its most recent update in 2003, which states that “a modular, open systems approach shall be employed, where feasible.” DOD’s 2008 acquisition instruction also provides that “program managers shall employ a modular open systems approach to design for affordable change, enable evolutionary acquisition, and rapidly field affordable systems that are interoperable.” Recently, the Under Secretary of Defense for Acquisition, Technology and Logistics has placed renewed emphasis on an open systems approach as part of the 2010 and 2012 Better Buying Power initiatives to increase efficiency in defense spending through effective competition. Specifically, the initiatives aim to improve the department’s early planning for open architectures, including making open systems considerations part of the design process. An open systems approach has great potential to generate efficiencies for product manufacturers, lower total ownership costs for consumers, and transform industries because it spurs industry growth, competition, and innovation. To get the most from open systems, the approach is best implemented at the start of product design, because this is where initial modularity, key interface, and data ownership decisions are made; implementing it later would require costly redesign.
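The modular, open-interface idea at the heart of this approach can be illustrated with a minimal sketch in Python. The “radio” interface and vendor components below are hypothetical stand-ins, not an actual DOD standard; the point is that a platform written against a published interface can accept conforming components from any supplier.

```python
# Minimal sketch of an open systems design: the platform depends only on a
# published interface, so components from different vendors are interchangeable.
from abc import ABC, abstractmethod

class RadioInterface(ABC):
    """Stand-in for an open, published standard at a key interface."""

    @abstractmethod
    def transmit(self, message: str) -> None: ...

class VendorARadio(RadioInterface):
    def transmit(self, message: str) -> None:
        print(f"[Vendor A radio] sending: {message}")

class VendorBRadio(RadioInterface):
    def transmit(self, message: str) -> None:
        print(f"[Vendor B radio] sending: {message}")

class Platform:
    """The system integrates the component through the interface only."""

    def __init__(self, radio: RadioInterface):
        self.radio = radio

    def report_status(self) -> None:
        self.radio.transmit("status: nominal")

# Either vendor's component plugs in without modifying the platform itself.
for radio in (VendorARadio(), VendorBRadio()):
    Platform(radio).report_status()
```

Because the platform never references a particular vendor, a new supplier can compete for the component without forcing a redesign of the system, which is the competitive benefit the open systems approach is meant to capture.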
Open systems have been successfully used by commercial companies, such as those in the personal computer and software markets, and have helped customers avoid sole-source prices and helped manufacturers address obsolescence and diminishing-resource issues. DOD has also used open systems to some extent and reports that open systems have resulted in cost and schedule efficiencies in the development and upgrade of some of its acquisition programs.

An open systems approach offers several potential benefits. For one, competition among multiple suppliers can lower prices and give consumers access to more technologically advanced products. In addition, the capability to accept plug-in components results in reduced development time and costs for manufacturers because parts are easier to integrate. Furthermore, upgrades and repairs take less time and are less costly for consumers. Finally, when different products adhere to the same standards, an open system design enhances interoperability among products.

An open systems approach can also spur industry growth and entrepreneurial creativity, transforming an industry and offering benefits to manufacturers, suppliers, and consumers. A prime example where this has occurred is the personal computer industry. In 1981, International Business Machines (IBM) Corporation introduced its personal computer, which was designed as an open system. IBM used already existing components, including the monitor from another IBM computer, and commercially available off-the-shelf parts such as software, floppy drives, and an Intel processor. Upon releasing the personal computer, IBM openly published its hardware and software specifications, allowing other manufacturers to develop compatible software and peripheral hardware, such as the monitor, keyboard, and mouse. Figure 2 illustrates the personal computer as an open system, made up of compatible components that can be produced and integrated by different manufacturers.

Since the time IBM publicly released the specifications of its personal computer, the market has grown exponentially in terms of manufacturers developing computers and related devices such as printers and scanners, third-party suppliers developing software applications that can be used on the computers, and consumers purchasing computers, software, and peripherals. Increased competition and technological innovation brought on by the use of an open systems approach, among other things, has helped make computers affordable to consumers. For example, according to IBM, one of its predecessors to the personal computer sold for $90,000; its first personal computer was sold in retail stores for $1,565. In the present marketplace, multiple computer manufacturers are developing computers that have 500 times the processing power of IBM’s early personal computer and sell for as little as $400. The personal computer industry has evolved over the past three decades to meet consumer demand and to leverage new technologies developed by the large number of manufacturers and suppliers competing in the marketplace.
For example, in the mid-1990s, a group of companies, including Intel, Microsoft, and Hewlett-Packard, developed the Universal Serial Bus (USB) interface standard to reduce the number of different connectors, such as parallel and serial ports, that were required to allow various components to work together. The USB standard is now widely used in various devices beyond computer peripherals, including cell phone chargers and music players. Other standards, such as the High-Definition Multimedia Interface, have been developed to address the need for standards for high-definition televisions and computer monitors. Perhaps the biggest transformation in the computer industry has been the Internet, which the International Telecommunication Union, a United Nations agency, estimated in 2013 to have 2.7 billion users worldwide. This communications technology is supported by open, non-proprietary standards, which allow consumers to have easy access to information on a wide variety of topics, music, shopping opportunities, online banking, friends, their workplace, and more.

DOD has also benefited from an open systems approach. For example, the Navy used an open systems approach to upgrade the sonar systems on its submarines, which helped it increase the number of modernized submarines. In 2009, the Navy reported reductions of:

• 17 percent in program development and production costs,

• 13 percent in operating and support costs, and

• 80 percent in development time—developing and installing the sonar on the first ship 2 years after the program started, as opposed to a decade or more.

The Navy also used an open system architecture to upgrade its Virginia-class submarine program to counteract the effects of obsolescence and ensure the submarines have the capability to respond quickly to changing missions and threats. In 2010 the Navy reported $96 million in cost avoidance since the upgrade program’s inception in 2001.

In addition, in 2008, Congress directed DOD to develop a strategy for commonality and standardization in UAS ground control station architecture. As part of a joint service/industry effort, the UAS Task Force developed a common, open ground control station architecture, called the UAS Control Segment (UCS), which can be integrated into any UAS program’s ground control station. The UCS architecture enables the reuse of individual software applications across different types of ground control stations, such as those for weather or vehicle status, thus eliminating the need to redevelop the applications for each new ground control station. The first version of the UCS architecture was released in 2010, and the ability to share software applications was first made available to UAS programs in January 2013. OSD and service officials point to several benefits from using UCS software. For example, one task force official stated that a demonstration of the UCS architecture in 2010 showed that the average time to integrate 10 applications was less than 2 weeks, significantly shorter than the traditional integration time for a new ground control station capability, which can be about 1 year. DOD also estimates that competition for software applications can eliminate up to 90 percent of their cost. In addition, the architecture will enable rapid integration of new aircraft sensors, weapons, and other payloads; a simple sketch of this application-reuse idea follows below.

The services vary in the extent to which they have adopted open systems for DOD’s 10 largest UAS, with the Navy leading the other services.
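As referenced above, the application reuse that a common ground control station architecture enables can be illustrated with a minimal plug-in sketch. The registry and function names below are illustrative only and are not the actual UCS interfaces.

```python
from typing import Callable, Dict

# Shared registry of applications written once against a common interface;
# the names here are illustrative, not the actual UCS architecture.
APP_REGISTRY: Dict[str, Callable[[dict], str]] = {}

def register_app(name: str):
    def wrap(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        APP_REGISTRY[name] = fn
        return fn
    return wrap

@register_app("weather")
def weather_app(telemetry: dict) -> str:
    return f"winds {telemetry.get('wind_kts', 0)} kts"

@register_app("vehicle_status")
def vehicle_status_app(telemetry: dict) -> str:
    return f"fuel {telemetry.get('fuel_pct', 0)} pct"

class GroundControlStation:
    """Any station that understands the common interface can load the same
    applications without redeveloping them."""

    def __init__(self, name: str, app_names: list):
        self.name = name
        self.apps = [APP_REGISTRY[n] for n in app_names]

    def display(self, telemetry: dict) -> None:
        for app in self.apps:
            print(f"[{self.name}] {app(telemetry)}")

# Two different stations reuse the same weather application.
GroundControlStation("Station-1", ["weather", "vehicle_status"]).display(
    {"wind_kts": 12, "fuel_pct": 80})
GroundControlStation("Station-2", ["weather"]).display({"wind_kts": 5})
```

Once a "weather" application exists in the shared registry, any conforming station can load it rather than redevelop it, which mirrors the benefit officials described of integrating existing applications in weeks rather than the year a bespoke capability can take.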
Three of the Navy’s four current and planned UAS programs incorporated, or are planning to incorporate, an open systems approach from the start of development in key components of their UAS—the air vehicle, ground control station, and payloads (i.e., cameras and radar sensors). Conversely, none of the Army or Air Force UAS programs incorporated the approach from the start of development because, according to Army and Air Force officials, legacy UAS programs tried to take advantage of commercial off-the-shelf technology or began as technology demonstration programs. Several of these programs (4 of 6) are starting to incorporate the approach, primarily for the ground control station of fielded aircraft during planned upgrades. For example, the Army did not initially include an open systems approach for its three UAS programs, but has since developed a universal ground control station with open interfaces that each of its programs will use. None of the Air Force’s three UAS programs were initially developed as an open system, and only one is being upgraded to include an open systems approach. Each of the programs that have adopted an open systems approach expects to achieve cost and schedule benefits, such as reduced upgrade costs and quicker upgrade times. Figure 3 identifies the UAS programs that introduced an open systems approach at the start of development for the air vehicle, ground control station, and payloads.

Three of the Navy’s four current and planned UAS programs—the Small Tactical UAS (STUAS), Triton, and Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS)—which are less than 5 years old, included or are planning to include an open systems approach from the start of development for the key components of their systems. The Fire Scout program, which began in 2000, initially incorporated an open systems approach for its ground control station and later included open system elements in its air vehicle and payloads during an upgrade effort. The Navy expects significant benefits in return, such as reduced development and integration time and costs, as well as increased future competition for new system payloads. Since these systems, for the most part, either have not been fielded or have not begun development, program officials were only able to provide us with estimates of potential savings.

The STUAS program, which started in 2010, made modularity a key requirement for the system, as documented in its acquisition strategy and systems engineering plan, and required the contractor to provide the program office with rights to key interfaces in the development contract. For example, the STUAS program owns the specifications for the data link between the air vehicle and ground control station, as well as the specifications for the payload interfaces. Officials noted that these elements allow the program to compete contracts for both the air vehicle and ground control station in the future for upgrades or to meet changing requirements. In addition, program and contractor officials noted that by having the rights and specifications to the payload interfaces, the program will be able to integrate and test third-party designed payloads within a matter of days or months, as opposed to the years typically required to test new system payloads. Program officials also anticipate that they will be able to independently integrate at least 32 different payloads developed by 24 different manufacturers.
The Triton program, which started in 2008, developed an open systems management plan and described in detail the steps it plans to take to achieve an open system in its acquisition strategy and systems engineering plan. The Triton includes both hardware and software modularity, which gives the program the ability to integrate new payloads and introduce software upgrades without affecting the rest of the system. For example, officials noted that the system’s camera is a separate subsystem, which makes it easier to upgrade the capability individually, and the flight control software is separated from the rest of the system’s software, resulting in less required testing when new capabilities are added. Program officials estimated that software testing could be reduced by as much as 66 percent compared to systems that do not have an open system design, in part because the program does not have to test the entire system when introducing new software or upgrades. In addition, the program anticipates that the open system design will facilitate increased competition among payload suppliers, which could result in lower prices and better payload capabilities.

The Navy is planning to use an open systems approach for its future UCLASS program. The program has identified and designated key system interfaces and, according to program officials, it plans to require contractors to use particular open standards, such as an open payload interface for electro-optical/infrared sensors, which is expected to reduce acquisition costs and simplify integration, as well as an open avionics architecture. The Navy developed the architectures and provided them to industry beginning in 2012. Officials developing the avionics architecture estimated that development and integration costs for multiple platforms using the avionics could be reduced significantly, as compared to using a closed avionics architecture.

The Fire Scout program included an open systems design for its original ground control station when it began development in 2000, but the rest of the system remained proprietary. However, when the program upgraded from the MQ-8A to the MQ-8B air vehicle in 2004, it secured the data rights to key system interfaces and introduced modularity into the air vehicle and payloads. As a result of a new modular software design, officials noted that payload data links are separated from the flight control software and other mission systems, which is expected to cut the time needed to test software and integrate a payload by about half. For example, program officials said they were able to integrate a new radar in only 18 months; officials estimated this integration previously would have taken 3 years.

The Army and Air Force did not originally plan to make the air vehicle, ground control station, or payloads of their UAS programs open systems. Instead, program officials stated that they relied on commercial off-the-shelf systems and technology demonstration programs that prime contractors developed and then modified and upgraded over time, without the benefit of competition that could help keep costs low. Over the past several years, the Army has taken steps to make the ground control stations open systems for all three of its fielded UAS and the payload interfaces open in two of those systems, while the Air Force plans to make the ground control station an open system for one of its three UAS.
The Army’s three UAS programs—Hunter, Shadow, and Gray Eagle—were all initially developed as proprietary systems and did not include an open systems approach for all three key components—the air vehicle, ground control station, and payloads. Moreover, the Army’s UAS ground control stations limited interoperability and resulted in the Army paying for multiple ground control stations that provided similar capabilities. The Army eventually developed a common ground control station for the three UAS; however, the new station was still proprietary. All three of the Army’s UAS programs are now upgrading to a universal ground control station that incorporates an open architecture to address obsolescence issues and increase interoperability. According to Army program officials, ground control stations require continuous hardware and software upgrades as the technology becomes obsolete. Even though an open systems approach is being incorporated later in the programs’ life cycles, officials believe the benefits—reduced obsolescence issues, reduced upgrade costs, and increased interoperability—outweigh the costs. For example, the Army’s new universal ground control station will give Army operators in the field the ability to fly Hunter, Shadow, and Gray Eagle from one ground control station. This was not possible with the Army’s legacy ground control stations, which did not use open architectures.

The Army’s Gray Eagle and Shadow programs are also incorporating open system elements for the systems’ payloads to save integration time and cost. For example, the Shadow program modified its air vehicle in 2008 to include a universal payload pod that allows any vendor that can meet the standard power and data connections to design a payload to be integrated onto the system. Program officials noted that, to date, four third-party vendors have used the universal pod for their products. Similarly, Gray Eagle program officials stated that the program owns the data rights to all payload interface specifications, which allows third-party vendors to develop a payload using these specifications. However, officials noted that the program has to rely on the prime contractor to integrate new payloads because the Army does not have the expertise to do so.

The Air Force has had limited success in modernizing its UAS to include open systems. For example, the Reaper program plans to upgrade to an open ground control station, but the remainder of the system remains proprietary. The other two programs—the Predator and Global Hawk—included language in their planning documents stating their intention to introduce open system elements later in their respective life cycles. However, the Predator’s age and fiscal constraints on the Global Hawk prevented those programs from adopting an open systems approach. As a result, the two systems remain largely proprietary and are now facing challenges sustaining and upgrading their systems.

The Predator program began in 1994 as an advanced concept technology demonstration program and is one of the oldest systems in DOD’s UAS portfolio. Program officials stated that the Predator’s software is not modular and the program has no intention of modifying the software because the Air Force is planning to divest itself of Predator aircraft once more Reapers are fielded. Predator officials also noted that sustainment and obsolescence challenges remain a risk area for the program.
Officials from the Global Hawk program, which started development in 2001, also stated that obsolescence is a major problem for that program, particularly for the ground control station. The program had recently planned to develop a new ground control station utilizing an open systems architecture. However, the Air Force cancelled the upgrade effort in 2013 due to what program officials described as fiscal constraints, even though it plans to use the aircraft through at least 2032. The Air Force is now planning to continue to maintain the legacy Global Hawk ground control station and communications system, although obsolescence and other sustainment challenges contributed to a critical Nunn-McCurdy cost breach in 2011.

Policies and leadership support are two fundamental ways of getting DOD’s weapon acquisition community to shift from a culture of relying on contractors to provide proprietary systems to one where systems are designed to be open. In general, policies are needed to establish the overall plan and acceptable procedures that guide acquisition strategies, and leadership is needed to champion policies and new initiatives, as well as to ensure that technical expertise is available to implement them. DOD has cited a preference for acquiring open systems in its policies since 1994 and most recently in its Better Buying Power initiative, which requires programs to outline an approach for using open systems architectures at milestone B—the start of development. The Navy developed an open systems policy in 2005, and its UAS programs largely followed that policy from the start of development. The Air Force and Army also have open systems policies, but their UAS programs did not implement them from the start of development and, because of this, some programs are now having difficulty upgrading their systems.

Further, while DOD leadership is placing a renewed emphasis on an open systems approach through its Better Buying Power initiatives, it is not currently tracking the extent to which weapon acquisition programs are adopting open systems across the department or whether program offices have enough technical expertise to effectively implement an open systems approach. The Naval Air Systems Command has taken steps to ensure that its UAS program offices have the expertise to implement open systems by establishing a small group of experts that can assist program offices with developing their technology development and acquisition strategies with open systems in mind, but other program offices may lack their own expertise or access to similar expertise.

Although DOD acquisition policy directs program managers to employ a modular, open systems approach for acquisition programs to minimize life-cycle costs, the Navy is the only service that did so while developing its UAS programs. Based on our review of UAS programs, the Navy leads the other services in adopting an open systems approach for its acquisition programs, and Navy program officials cited their service’s open systems policy as the driver for adoption. The Navy’s policy required its programs to incorporate open architecture principles into program requirements beginning in 2005, which means that UAS programs would include an open systems approach as part of their acquisition strategies prior to the start of program development. The Navy also established an oversight and reporting structure to provide reasonable assurance that its programs are following this policy.
For example, the Navy assigns overall responsibility and authority for directing the effort to one office, the Deputy Assistant Secretary of the Navy for Research, Development, Test and Evaluation. This office is tasked with defining an overarching strategy, providing systems engineering leadership to other Navy acquisition offices, and overseeing and reporting on implementation efforts.

The Air Force and Army also have policies that require open systems to be included in a program’s acquisition strategy, which would occur prior to the start of program development. However, none of their UAS programs incorporated an open systems approach at the start of development, including a program that began after the policies were issued. Instead, these services favored off-the-shelf systems or those that could be quickly fielded. In addition to not including open systems during the initial design of their programs, these services, according to officials, are having some difficulty acquiring the necessary funding to make their legacy systems open as they are upgraded, and we found they are not consistently implementing the new UCS architecture developed for all ground control stations. For example, the Global Hawk program planned to use an open systems approach for its ground control station to address obsolescence issues. However, as mentioned earlier, the Air Force terminated the effort because of funding constraints, even though it plans to use the aircraft through at least 2032 and the ground control station will require upgrades and costly support during this time span. In addition, the Air Force’s Global Hawk and Predator and the Navy’s STUAS do not plan to adopt the recently developed UCS architecture at this time. The Global Hawk had planned to incorporate UCS as part of its now cancelled upgrade effort, and Predator officials stated that the Air Force is not investing in upgrades, including UCS.

Further complicating the UCS architecture effort, the Air Force developed a “complementary” ground control station architecture without the involvement of the other services or OSD. The Air Force plans to use both architectures on the Reaper system. Officials explained that the Air Force developed the complementary architecture because it wanted to attain the capability faster. However, both architectures will be fielded simultaneously on the Reaper, and contractors developing UAS ground control station software stated that adopting both architectures will be more costly for DOD.

To improve implementation of an open systems approach, DOD’s 2010 Better Buying Power initiative now requires programs, at milestone B, to outline an approach for using open systems architectures and acquiring technical data rights to ensure sustained consideration of competition in the acquisition of weapon systems. The approach is to be based on a business case analysis and engineering trade analysis. DOD recognizes that it needs to increase its leadership efforts to implement open systems for its weapon acquisition programs, but does not know the extent to which programs are implementing this approach or whether the services have the requisite technical expertise. According to federal internal control standards, agencies should monitor programs to ensure that policies are being implemented, such as by establishing and tracking program metrics, and also ensure that their workforce has the required skills to implement policies.
Thus, DOD should monitor open systems implementation and also assess whether program offices have the expertise to carry out the open systems policies. Various independent standards experts, DOD contractors, and DOD systems engineering officials identified DOD leadership as key to effective adoption of an open systems approach. Several of these officials cautioned that prime contractors may be resistant to providing open systems because they are able to achieve greater financial benefits by selling DOD proprietary products, which they alone can integrate, upgrade, and maintain. Specifically, these officials believe that successful implementation of an open systems approach requires that DOD provide clear and consistent direction for its approach and that programs plan for an open systems approach prior to the start of development. This includes defining the open systems approach in a program’s technology development and acquisition strategies prior to initiating the technology development phase at milestone A and the start of engineering and manufacturing development at milestone B in DOD’s acquisition process, respectively. Further, they believe that contracts and requests for proposal should include appropriate language that describes the open systems architecture, defines open standards, and establishes requirements for control documents to ensure the government retains rights to the identified key interfaces.

Acquiring a new weapon system using an open systems approach could be more expensive at the beginning, particularly regarding ownership of data rights. Yet the potential benefits of obtaining the data rights, in terms of greater competition and reduced upgrade and repair costs, are widely believed to significantly outweigh the costs, especially if the rights are obtained from the start of development. DOD could avoid purchasing data rights if it works with industry to develop standards. However, officials from standards organizations we spoke with said that in these cases DOD would have to work with the organizations several years in advance to ensure that the standards are available at the start of development.

The Better Buying Power initiative for open systems is one of DOD’s efforts to increase competition and drive efficiency in acquisitions. While competition is difficult to achieve at the system level because there are a limited number of prime contractors that develop DOD weapon systems, an open systems approach can boost opportunities for competition at the subsystem level. As part of the Better Buying Power initiatives, the Under Secretary of Defense for Acquisition, Technology and Logistics formed an Open Systems Architecture Data Rights Team co-led by the Office of the Deputy Assistant Secretary of Defense for Systems Engineering (SE) and the Deputy Assistant Secretary of the Navy for Research, Development, Test and Evaluation. The team is developing guidance, tools, and training in an effort to gain more momentum for the use of an open systems approach and the acquisition of appropriate data rights across DOD. This includes a contract guidebook, to be issued in the summer of 2013, intended to assist program managers in incorporating open systems principles into their acquisition programs. Officials from two of the UAS programs we reviewed, UCLASS and Triton, told us they had used a draft version of this guidebook to help inform their contracting strategies.
The team also updated the systems engineering guidance in the Defense Acquisition Guidebook to recommend that program managers update their technology development strategy, acquisition strategy, and systems engineering plan throughout their program’s life cycle to reflect their respective open systems strategies. Finally, the team is leveraging other resources previously developed by the Open Systems Joint Task Force, such as an analytical tool for programs to monitor and evaluate their open systems implementation and a program manager’s guide intended to assist program offices in integrating an open systems approach into their acquisition strategies for new and legacy systems. According to officials from the Open Systems Architecture Data Rights Team, the team is required to report to the Under Secretary of Defense for Acquisition, Technology and Logistics on its progress in meeting initiatives related to Better Buying Power by October 2013, but that reporting has not yet begun.

The Deputy Assistant Secretary of Defense for Systems Engineering indicated that several new programs are incorporating an open systems approach during early planning. For example, the Navy’s Next Generation Jammer included open systems language in its 2012 request for proposal. Additionally, the Army’s Ground Combat Vehicle program and the Air Force’s Space Fence program both considered open systems as part of their future design prior to or during technology development. However, the extent to which programs are adopting an open systems approach across the department is unclear because this information is not being tracked.

In addition, it is unclear whether the services have sufficient technical expertise to implement an open systems approach and enable DOD to be a savvy buyer of weapon systems. Program offices need open systems expertise to conduct systems engineering activities, even before a contract is awarded, to be able to determine how an open systems strategy fits into program requirements; identify which system interfaces are key to maintaining competitive opportunities; ensure sufficient data rights are obtained to support the competitive strategy; translate open system requirements into contracts; address any risks associated with the open systems approach; and, finally, validate that the contractor is providing the open system that the program required. However, officials from 11 offices and companies we spoke with stressed that DOD does not have adequate expertise across the department to effectively implement an open systems approach. These officials included a DOD contractor; the UAS Task Force; 6 of the 10 Air Force, Army, and Navy UAS programs; the Naval Air Systems Command technical experts; the DOD Open Systems Architecture Data Rights Team; and an open systems expert who consulted with DOD on an open systems approach.

The Defense Acquisition University (DAU), the organization responsible for training the acquisition workforce DOD-wide, offers training on open systems as part of its core acquisition curriculum required for both engineers and program managers. However, officials from every program we spoke with who had taken this training told us that the training is only sufficient to familiarize participants with the concept of open systems, but not in-depth enough to allow participants to implement the approach. DAU is developing additional training resources in support of open systems to be rolled out beginning in fall 2013.
This new training, according to a DAU official, will be geared toward providing program managers and data management personnel information on data rights. Open systems will be discussed, but primarily to give participants awareness of the approach. There are currently no plans for new in-depth training focused on an open systems approach. Some of the material may be added to the core acquisition curriculum, but officials said this transition can take years.

As required by the Weapon Systems Acquisition Reform Act of 2009, the Deputy Assistant Secretary of Defense for Systems Engineering is responsible for reviewing the systems engineering capabilities of the military departments and identifying areas that require changes or improvements. To meet this statutory requirement, SE requests that the Air Force, Army, and Navy submit a self-assessment of their systems engineering capabilities, which SE reviews to identify needed changes or improvements to the services’ capabilities. SE conducts this review annually and publishes its findings in annual reports to Congress, most recently in March 2013. However, the services have not provided an analysis of their open systems capabilities as part of their self-assessments. Therefore, the Deputy Assistant Secretary cannot be sure that program offices have adequate systems engineering resources to effectively implement an open systems approach.

The Naval Air Systems Command has a small group of open systems technical experts that its acquisition programs can use when developing their open systems approach. Officials from the UCLASS, STUAS, and Triton programs said they relied on experts from this team to assist them in planning and executing their open systems approach, including developing a request for proposal, selecting open standards, and responding to contractor bids. UCLASS program officials also said they received assistance from the Deputy Assistant Secretary of the Navy for Research, Development, Test and Evaluation—the Navy’s lead office for open systems—when developing its request for proposal.

The adoption of an open systems approach in DOD acquisition can provide significant cost and schedule savings. Based on projections from several Navy UAS programs, for example, the Navy could avoid considerable repair and upgrade costs on individual programs, as well as improve performance, by incorporating an open systems approach prior to the start of development. This is because multiple suppliers can compete to quickly replace key components on the UAS with more capable components. Traditionally, DOD has acquired proprietary weapon systems that limit these opportunities and make systems more costly to develop, procure, upgrade, and support. DOD has cited a preference for acquiring open systems in its policy since 1994, and each of the services has since issued open systems policies. Yet we found the Army and Air Force have been slow to make their UAS open systems, particularly from the start of development. The Navy, on the other hand, has generally designed its UAS to be open from the start of development, where it can reap the most benefits.

To successfully change the services’ inclination toward buying proprietary systems and shift toward the acquisition of open systems, DOD needs to address policy and leadership challenges. Moving to open systems requires strong leadership to overcome preferences for acquiring proprietary systems.
While DOD’s Better Buying Power initiative requires programs to outline an approach for using open systems architectures at milestone B, OSD does not have adequate insight into the extent to which an open systems approach is being used by individual weapon acquisition programs. Further, OSD does not know if program offices have the systems engineering expertise required for effective implementation of an open systems approach or if additional expertise is needed. Without adequate knowledge of policy implementation and program office expertise, DOD cannot have reasonable assurance that an open systems approach is being implemented effectively by the services. Until DOD takes action to overcome these challenges, the department will likely continue to invest in costly proprietary systems. Addressing these challenges should increase DOD’s ability to promote more competition, save taxpayer dollars, and more quickly field new capabilities to the warfighter, particularly if an open systems approach is incorporated into program strategies prior to the start of development at milestone B.

We are making four recommendations to improve the department’s implementation of an open systems approach for UAS and other weapon acquisition programs, as well as its visibility into open systems implementation and program office expertise.

We recommend that the Secretary of Defense direct the Secretaries of the Air Force and Army to implement their open systems policies by including an open systems approach in their acquisition strategies.

We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to define appropriate metrics to track programs’ implementation of an open systems approach.

We also recommend that the Secretary of Defense direct the Secretaries of the Air Force, Army, and Navy to take the following actions:

• require their acquisition programs to include open systems metrics developed by the Under Secretary of Defense for Acquisition, Technology and Logistics in their systems engineering plans, track progress in meeting these metrics, and report their progress to the Under Secretary of Defense for Acquisition, Technology and Logistics at key acquisition milestones; and

• assess their respective service-level and program office capabilities relating to an open systems approach and work with the Deputy Assistant Secretary of Defense for Systems Engineering to develop short-term and long-term strategies to address any capability gaps identified.

Strategies could include the Navy’s cross-cutting approach, in which a team of a few technical experts within the Naval Air Systems Command could be available to work with program offices, as necessary, to help develop open systems plans.

DOD provided us with written comments on a draft of this report. DOD partially concurred with all four recommendations. DOD’s comments are reprinted in appendix II. DOD partially concurred with our first recommendation, agreeing on the value of implementing an open systems approach but citing existing department policies and guidance, which it believes are sufficient for the military departments to implement open systems architecture in acquisition programs. DOD also noted in its comments that the decision to implement an open systems approach in a particular program’s acquisition strategy is made on a case-by-case basis, based on a number of considerations, including mission, threat, vulnerability assessment, operating environment, and business case.
We agree that an open systems approach should be informed by these considerations, and we also cited both OSD and service-level policies and guidance governing an open systems approach in our report. However, a number of Air Force and Army unmanned aircraft programs missed opportunities to adopt an open systems approach early in their life cycles and tried to do so only later, when it is more costly and complex. Navy programs fared better partly because they augmented policy by addressing the open systems approach in individual acquisition strategies. We maintain that the Air Force and Army would benefit by including an open systems approach in their acquisition strategies before the start of system development.

DOD partially concurred with our second and third recommendations. DOD noted that program implementation of an open systems approach should be assessed through its existing milestone decision process. DOD further noted that acquisition strategies and systems engineering plans, which document a program’s open systems strategy, are assessed at multiple decision reviews and are considered by the milestone decision authority in the overall context of the program. We agree that the milestone decision process is the appropriate venue to review programs’ open systems strategies. However, as discussed in our report, we found that OSD does not have adequate insight into the extent to which an open systems approach is being used by weapon acquisition programs and thus cannot have reasonable assurance of the widespread use of an open systems approach across the department. Based on our review of unmanned aircraft programs, we found that the Navy had generally embraced an open systems approach for its acquisition programs before the start of development, whereas the Air Force and Army had not. Further, it is unclear to what extent the Air Force and Army will adopt the approach for their programs, as DOD’s recent Better Buying Power initiative requires. To provide clearer insight into whether the services are planning for and implementing an open systems approach, we maintain that the Under Secretary of Defense for Acquisition, Technology and Logistics should establish metrics and track weapon acquisition programs’ progress in meeting them at key milestones. For example, this could include tracking whether programs are including open systems in their acquisition strategy documents prior to milestone B and are following through on those plans at subsequent acquisition milestones.

DOD partially concurred with our fourth recommendation but did not explain its position or what, if anything, it would do in response. DOD did not comment on whether the military services and their program offices have sufficient capabilities with respect to open systems. As we discussed in our report, we found that OSD does not know if program offices have the systems engineering expertise required for effective implementation of an open systems approach or if additional expertise is needed. To address this possible gap, we continue to believe that the Air Force, Army, and Navy should assess their respective service-level and program office capabilities relating to an open systems approach and work with the Deputy Assistant Secretary of Defense for Systems Engineering to develop short-term and long-term strategies to address any capability gaps identified.
As we suggested, one such strategy could be the Naval Air Systems Command’s cross-cutting approach, in which a team of a few technical experts could be available to work with program offices, as necessary.

We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The House Armed Services Subcommittee on Tactical Air and Land Forces requested that we assess the Department of Defense’s (DOD) progress in implementing open systems in unmanned aircraft system (UAS) acquisitions. Since an open systems approach can facilitate competition and upgrades for any type of weapon system acquisition, and because DOD is directing its current open system initiatives department-wide, we evaluated the benefits of an open systems approach and the challenges DOD is facing in adopting the approach broadly. Therefore, this report examines (1) the characteristics and benefits of an open systems approach, (2) DOD’s efforts in applying an open systems approach to its UAS portfolio, and (3) challenges, if any, DOD is encountering in implementing this approach.

To determine the characteristics and benefits of an open systems approach, we reviewed relevant DOD policies, guidance, and handbooks and conducted a literature search to identify open systems-related examples from private companies and standards development organizations. We interviewed officials from four standards organizations that we selected based on our literature search: the Aerospace Industries Association, the Peripheral Component Interconnect Industrial Computer Manufacturers Group, SAE International, and the Universal Serial Bus Implementers Forum. These organizations have responsibility for developing open standards for their respective industries, and their experience is commensurate with the age of their industries. For example, the Universal Serial Bus Implementers Forum has existed for almost 20 years, and SAE International has existed for over 100 years. We also interviewed personnel from DOD contractors, including AAI, Dreamhammer, General Atomics-Aeronautical Systems Group, Insitu, Northrop Grumman, and Raytheon/Solipsys, to identify their leading practices in developing open standards and implementing an open systems approach. In addition, we interviewed an expert from Carnegie Mellon’s Software Engineering Institute about experiences working with DOD on an open systems approach. We also interviewed DOD officials knowledgeable about open systems and open standards development from the Army, Navy, and the Office of the Secretary of Defense (OSD). We analyzed interviews with the open systems expert, DOD officials, standards organizations, and all four of the contractors whom we asked questions related to open systems best practices (General Atomics, Northrop Grumman, Raytheon/Solipsys, and Dreamhammer) to identify key practices for successfully adopting an open systems approach.
To determine the extent to which DOD is applying an open systems approach to its UAS portfolio, we interviewed and collected documentation, including briefings, acquisition strategies, and systems engineering plans, from the UAS Task Force and UAS program offices on open systems efforts. In addition, we developed a questionnaire to collect information on which elements of an open systems approach UAS programs incorporated, their use of guidance and tools, and their contracting strategies. After we drafted the questionnaire, we asked our Chief Technologist as well as a member of DOD’s Open Systems Architecture Data Rights Team to review and validate our questions. We revised the questionnaire to reflect comments from these reviews, then distributed the electronic Microsoft Word questionnaire by e-mail to 10 current and planned UAS acquisition programs from Groups 2-5—the larger UAS programs—in October 2012; all 10 responded that month. We excluded Group 1 UAS from our analysis because these are the smallest UAS and recent DOD open systems efforts focus on Group 2-5 UAS. Using each program’s questionnaire response and follow-up interviews conducted with each program office to corroborate the responses, we evaluated the key areas where UAS programs implemented an open systems approach—the air vehicle, the ground control station, and the system payloads—and determined at what point in the acquisition life cycle UAS programs introduced their approach.

To determine the challenges acquisition programs face in implementing an open systems approach, we synthesized information from DOD policies and guidance, program questionnaires, and interviews with officials from the Office of the Deputy Assistant Secretary of Defense for Systems Engineering; the Office of the Secretary of Defense’s UAS Task Force; the Defense Acquisition University; the Office of the Deputy Assistant Secretary of the Navy for Research, Development, Test and Evaluation; acquisition offices within the Air Force, Army, and Navy; UAS program offices; and UAS contractor personnel.

We conducted this performance audit from July 2012 through July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, key contributors to this report were Cheryl Andrew, Assistant Director; Richard Burkard; Lisa Fisher; Danielle Greene; Laura Jezewski; Deanna Laufer; Brian Lepore; Jean McSween; Paige Muegenburg; Brian Mullins; and Roxanna Sun.
For fiscal year 2014, DOD requested over $11 billion to modify existing weapon systems—more than 10 percent of its total procurement budget. Traditionally, DOD has acquired proprietary systems, which are costly to upgrade and limit opportunities for competition. Through its Better Buying Power initiatives, DOD has re-emphasized the use of an open systems approach as a way to reduce costs through effective competition. GAO was asked to examine DOD's progress in implementing an open systems approach for UAS acquisitions. This report addresses (1) the characteristics and benefits of an open systems approach, (2) DOD's efforts in implementing an open systems approach for its UAS portfolio, and (3) challenges, if any, DOD is encountering in implementing this approach. GAO analyzed relevant literature and DOD policies on open systems and interviewed agency and private industry officials to understand how open systems have been implemented and their benefits. In addition, GAO assessed acquisition documents and questionnaire responses from 10 current and planned UAS programs to determine their open system strategies.

An open systems approach, which includes a modular design and standard interfaces, allows components of a product (like a computer) to be replaced easily. This allows the product to be refreshed with new, improved components made by a variety of suppliers. Designing weapons as open systems offers significant repair, upgrade, and competition benefits that could translate to millions of dollars in savings as the weapons age.

The services vary in their use of open systems on the Department of Defense's (DOD) 10 largest unmanned aircraft systems (UAS). The Navy used an open systems approach at the start of development for the air vehicle, ground control station, and payloads (i.e., cameras and radar sensors) for three of its four current and planned UAS and anticipates significant efficiencies. For example, Navy and contractor officials expect the Small Tactical UAS to be able to integrate at least 32 payloads developed by 24 manufacturers, some in a matter of days or months rather than the years previous programs experienced. Conversely, none of the Army or Air Force UAS programs initially implemented an open systems approach, relying instead on prime contractors to upgrade and modernize the UAS. The Army is now developing an open ground control station that its three legacy UAS programs will use. Only one of the Air Force's three UAS programs plans to implement an open systems approach on fielded aircraft.

Policies and leadership can help drive DOD's acquisition community to use an open systems approach, but challenges exist. Although DOD and the services have policies that direct programs to use an open systems approach, the Navy is the only service that largely followed the policy when developing its UAS. In addition, while new open systems guidance, tools, and training are being developed, DOD is not tracking the extent to which programs are implementing this approach or whether programs have the requisite expertise to implement it. Navy UAS program officials told us they relied on technical experts within the Naval Air Systems Command to help develop an open systems approach for their programs. Until DOD ensures that the services are incorporating an open systems approach from the start of development and that programs have the requisite open systems expertise, it will continue to miss opportunities to increase the affordability of its acquisition programs.
GAO recommends that the Air Force and Army implement their open systems policies, DOD develop metrics to track open systems implementation, and the services report on these metrics and address any gaps in expertise. DOD partially concurred and stated that its current policies and processes are sufficient. GAO maintains additional action is needed.
The CSRS features that differ by employee group are discussed in the following sections.

Members of Congress can retire at younger ages and with fewer years of service than can general employees and congressional staff. General employees and congressional staff are eligible for optional retirement at age 55 with 30 years of service, at age 60 with 20 years, or at age 62 with 5 years. Members can retire at the same age and service combinations but may also retire at age 50 with 20 years and at any age with 25 years. Additionally, Members may retire at age 60 with 10 years of Member service and at age 50 with service in 9 Congresses. Law enforcement officers and firefighters may retire at age 50 with 20 years of service as a law enforcement officer or firefighter. Air traffic controllers can retire at age 50 with 20 years of air traffic controller service or at any age with 25 years of air traffic controller service.

The CSRS statute specifies the formulas that will be used in calculating benefit amounts for the various groups. For Members of Congress and congressional staff who have 5 or more years of congressional service, the formula is 2.5 percent of the average annual salaries they earned during their 3 consecutive highest-paid years (known as the “high 3”) for each year of congressional service. The same benefit formula applies to employees who are federal Claims Court judges, bankruptcy judges, U.S. Magistrates, or judges of the U.S. Court of Military Appeals. For example, the formula provides a benefit of 75 percent of high-3 salary to a Member or congressional staff with 30 years of congressional service. In comparison, the formula for general employees is 1.5 percent of high 3 for each of the first 5 years of service, 1.75 percent for each of the next 5 years of service, and 2 percent for each year of service greater than 10 years. The general employee formula provides a benefit of 56.25 percent of high 3 after 30 years of service. The benefit formula for law enforcement officers and firefighters is 2.5 percent of high 3 for each of the first 20 years of service and 2 percent for each year of service greater than 20. Thus, a law enforcement officer or firefighter who retires at age 50 with 20 years of service would receive 50 percent of high 3. After 30 years of service, the benefit would be 70 percent of high 3.

Air traffic controllers are covered by the general employee benefit formula, but they are guaranteed to receive no less than 50 percent of their high 3 at retirement. To illustrate this point, a controller who retires at age 50 with 20 years of service receives 50 percent of his or her high 3 because the general employee formula would provide only 36.25 percent of high 3 after 20 years of service. At 25 years of service, a controller would still receive 50 percent of high 3 because the general employee formula would provide 46.25 percent. The “break-even” point occurs at just under 27 years of service, when the general employee formula provides about 50 percent of high 3.

When Members of Congress retire before age 60, their accrued benefits are reduced. The reduction is one-twelfth of 1 percent for each month (1 percent a year) they are between ages 55 and 60 and one-sixth of 1 percent for each month (2 percent a year) they are younger than age 55. There is no age reduction for any of the other groups covered by CSRS when they meet the optional retirement eligibility requirements.
However, the reduction for Members younger than age 60 does not eliminate the overall advantages of their higher benefit formula compared with most other employees. A Member who retires at age 55 with 30 years of service receives a benefit equal to 71.25 percent of high 3 rather than the 75 percent he or she would otherwise receive without the age reduction (1 percent of accrued benefits for each of the 5 years the retiree is younger than age 60). General employees receive 56.25 percent of high 3 at age 55 and 30 years of service. Since the age reduction does not apply to congressional staff, they would receive 75 percent.

Several groups, including law enforcement officers, firefighters, customs inspectors, and Veterans Affairs physicians, receive an additional advantage in their benefit calculations that is not afforded to other employee groups. Their high-3 salaries include certain types of premium pay. For example, the high 3 of law enforcement officers includes pay they receive for administratively uncontrollable overtime or availability pay. The high-3 amounts for other groups are limited to basic salaries and do not include any overtime they may have received.

In certain circumstances, employees may retire before attaining the requirements for optional retirement or before they intended to retire. For example, general employees might be unable to continue in federal employment because their jobs were abolished, or congressional staff might retire involuntarily if the Members of Congress who employ them are not reelected. Also, employees (except congressional staff) might be given the opportunity to voluntarily retire early to save the jobs of younger employees when their agencies are downsizing or transferring functions to other locations.

CSRS does not include any separate early retirement provisions for Members of Congress. The optional retirement provisions apply if a Member loses an election. However, some of the age and service eligibility requirements available to Members for optional retirement are the same as the early retirement eligibility requirements for other employees under CSRS. General employees and congressional staff may retire early if they are age 50 with 20 years of service or any age with 25 years of service. Benefit amounts are reduced by one-sixth of 1 percent for each month (2 percent a year) they are younger than age 55. As discussed previously, Members may retire optionally at age 50 with 20 years of service, at any age with 25 years, or at age 50 with service in 9 Congresses (18 years). Similar to general employees, when Members retire under these provisions, their benefits are reduced by 2 percent for each year they are younger than age 55. However, unlike other employees, Members’ benefits are also reduced by 1 percent for each year they are between ages 55 and 60. Members who resign or are expelled from Congress cannot receive immediate benefits unless they are at least age 55 with 30 years of service, age 62 with 5 years of service, or age 60 with 10 years of Member service.

There are no special provisions for law enforcement officers, firefighters, or air traffic controllers to retire before meeting their age and service requirements for optional retirement. In situations where law enforcement officers, firefighters, or air traffic controllers might voluntarily retire early or might be separated involuntarily, the early retirement provisions (including the benefit formula) used for general employees are applied to these groups.
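The benefit formulas and age reductions described above lend themselves to direct computation. The following is a small Python sketch, using only the percentages given in the text, that reproduces the figures cited above (36.25 and 56.25 percent for general employees at 20 and 30 years, the controllers' 50 percent floor, and the 71.25 percent for a Member retiring at age 55 with 30 years). It is an illustration of the stated formulas, not an official benefit calculator.

```python
# CSRS benefit accrual as a percentage of high-3 salary,
# using the formulas described in the text.

def general_employee_pct(years):
    # 1.5% for each of the first 5 years, 1.75% for the next 5 years,
    # and 2% for each year beyond 10.
    return (1.5 * min(years, 5)
            + 1.75 * min(max(years - 5, 0), 5)
            + 2.0 * max(years - 10, 0))

def congressional_pct(years):
    # Members and congressional staff: 2.5% per year of congressional service.
    return 2.5 * years

def law_enforcement_pct(years):
    # 2.5% for each of the first 20 years, 2% for each year thereafter.
    return 2.5 * min(years, 20) + 2.0 * max(years - 20, 0)

def controller_pct(years):
    # Air traffic controllers use the general formula with a 50% floor.
    return max(general_employee_pct(years), 50.0)

def member_early_retirement_pct(years, age):
    # Members' accrued benefit is reduced by 1% per year between ages 55
    # and 60 and by 2% per year below age 55.
    reduction = 0.01 * min(max(60 - age, 0), 5) + 0.02 * max(55 - age, 0)
    return congressional_pct(years) * (1 - reduction)

assert general_employee_pct(20) == 36.25
assert general_employee_pct(30) == 56.25
assert law_enforcement_pct(30) == 70.0
assert controller_pct(25) == 50.0  # floor applies until about 27 years
assert round(member_early_retirement_pct(30, 55), 2) == 71.25
```

Running general_employee_pct over a range of years also confirms the break-even point noted above: the general formula first reaches 50 percent at 26.875 years of service, just under 27 years.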
Many employees do not continue in their federal jobs long enough to meet the age and service requirements for immediate retirement benefits. Employees who quit their jobs before completing 5 years of service receive no retirement benefits. Their contributions to the system are refunded upon request. Employees who stay longer than 5 years but leave before retirement eligibility may elect to leave their contributions in the retirement fund and receive their earned benefits later under the system's deferred retirement provisions. The deferred retirement provisions of CSRS are more generous for Members of Congress than for any other group. Deferred benefits for all employees, other than Members, are payable at age 62. Members may receive deferred benefits at age 62 if they had at least 5 years of federal service, but they are also eligible for deferred benefits at age 60 if they had at least 10 years of Member service or at age 50 if they had 20 years of federal service of which at least 10 were Member service (with the same benefit reductions as applied to Member optional retirements before age 60). Members of Congress receive another advantage under the deferred retirement provisions that is not available to any other employee group. CSRS provides benefits to the survivors of former Members who die in the interim between their separation from federal service and the age at which their deferred annuities would have commenced. Other former employees do not have this survivor protection. If they die before deferred benefit payments begin, their survivors are not eligible for survivor benefits. Rather, the former employees' contributions to the retirement fund are refunded to the survivors and no further benefits are payable. The maximum benefit allowed under CSRS for general employees and congressional staff is 80 percent of high 3. For general employees, about 41 years and 11 months of service are necessary to accrue benefits equal to 80 percent of high 3. Congressional staff benefits reach 80 percent after 32 years of congressional service. The same maximum benefit applies to law enforcement officers, firefighters, and air traffic controllers, but, because of their mandatory retirement requirements (discussed below), these employees generally cannot work long enough to earn benefits of 80 percent of high 3. Maximum benefits for Members of Congress are determined somewhat differently, and this difference is more favorable to Members. Their maximum benefit (reached after 32 years of service) is 80 percent of the greater of their high 3 or final salary as a Member of Congress. If a Member leaves Congress to accept an appointive position, the final salary of that position is used as the basis for the maximum benefit if it is greater than the former Member's high-3 salary or final salary as a Member.
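The service needed to reach the 80-percent ceiling can be checked directly against the benefit formulas. The short search below is an informal verification we constructed, restating the formulas above and counting service in whole months.

```python
# Informal check of the service needed to reach the CSRS 80% ceiling,
# counting service in whole months. Illustration only.

def general_pct(years):
    # General employee formula: 1.5% x 5, 1.75% x 5, then 2% a year.
    return (1.5 * min(years, 5)
            + 1.75 * min(max(years - 5, 0), 5)
            + 2.0 * max(years - 10, 0))

def months_to_cap(formula, cap=80.0):
    """First month of service at which the formula reaches the cap,
    returned as (years, months)."""
    months = 0
    while formula(months / 12.0) < cap:
        months += 1
    return divmod(months, 12)

print(months_to_cap(general_pct))       # (41, 11): 41 years, 11 months
print(months_to_cap(lambda y: 2.5 * y)) # (32, 0): congressional staff
```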
In some cases, retirees are reemployed by the government. The CSRS provisions covering reemployment of retired Members of Congress are significantly different from the provisions covering the reemployment of other retirees. In general, retirees (other than retired Members of Congress) who are reemployed by the government continue to receive their annuities, but their salaries are reduced by the amount of the annuity. A reemployed annuitant does not contribute to the retirement fund unless the individual chooses to have such deductions withheld. Reemployment for less than a year does not earn additional retirement benefits. However, annuitants reemployed for a year or more in positions subject to the retirement system are entitled to a supplemental annuity upon subsequent separation if they make retroactive contributions or already had deductions withheld. The supplemental annuity is based on the salary received during the reemployment period. Reemployed annuitants who serve for 5 years or more may, upon separation, elect to pay their retirement contributions retroactively (if they had not elected to have them withheld from pay) and have their annuities recomputed using their total service and new high-3 salary. When retired Members of Congress are reemployed in either elective or appointive positions, their annuities are suspended, and they again become covered by the system as if they had not retired. Reemployed Member retirees make contributions in the amount required for the positions they hold. Upon separation, they receive either (1) reinstated annuities increased by the cost-of-living adjustments that occurred during reemployment or (2) recomputed annuities with credit for their additional service, regardless of the length of the reemployment. In recognition of the differing retirement provisions for the various groups, the CSRS statute requires most groups with preferential benefits to make greater contributions to the retirement fund than general employees. General employees and air traffic controllers contribute 7 percent of their salaries. Members of Congress contribute 8 percent, and congressional staff, law enforcement officers, and firefighters contribute 7.5 percent. Any employee who has premium pay included in the high-3 salary average must also make contributions from his/her premium pay. For example, law enforcement officers must contribute 7.5 percent of their administratively uncontrollable overtime or availability pay because these premium payments are included in their high-3 salary averages. General employees, law enforcement officers, firefighters, and air traffic controllers receive service credit in their benefit calculations for any full months of unused sick leave they have accumulated at the time of retirement. Sick leave is not used in determining retirement eligibility or high 3. For example, the benefit amount for a general employee at age 55 with 30 years of service and 1 year of unused sick leave would be calculated as if the employee had 31 years of service. The CSRS statute allows the sick leave credit only for employees who are covered by a formal leave system. Since Members of Congress and most congressional staff are not under a formal leave system, they cannot receive the sick leave credit. Of all the employee groups covered by CSRS, only law enforcement officers, firefighters, and air traffic controllers are subject to mandatory retirement provisions. Law enforcement officers must retire by age 57 or as soon thereafter as they complete 20 years of service. Firefighters must retire by age 55 or as soon thereafter as they complete 20 years of service. Individual employees in both groups can be retained to age 60 if their agency heads determine it is in the public interest for them to stay. The mandatory retirement age for air traffic controllers is age 56, regardless of their years of service, and they can be retained to age 61 when the agency head determines it is in the public interest. The basic design of FERS differs markedly from that of CSRS.
In addition to a pension plan, FERS provides Social Security coverage to all employee groups and includes a Thrift Savings Plan in which all groups may participate. The Social Security and thrift plan provisions do not vary by employee group. The FERS pension plan continued the CSRS practice of providing preferential benefits to Members of Congress, congressional staff, law enforcement officers, firefighters, and air traffic controllers. However, some of the differing benefits under CSRS do not exist in the FERS pension plan. FERS eliminated the benefit reduction under CSRS that applies to Members of Congress who retire before age 60. FERS has no maximum benefit provision. Thus, the higher maximum benefits available to Members of Congress under CSRS do not exist in FERS. FERS provides benefits to the survivors of all employees and Members who have completed at least 10 years of service and die in the interim between their separation from federal service and the age at which their deferred annuities would have commenced. CSRS makes this benefit available only to Members of Congress. FERS does not grant service credit for unused sick leave to any employees, departing from the CSRS practice of providing a sick leave credit to all employee groups other than Members of Congress and most congressional staff. FERS eliminated CSRS' preferential treatment of retired Members of Congress who become reemployed by the government. It requires that annuities for all reemployed retirees, including Members, be continued during the reemployment period but deducted from the retirees' salaries. Any reemployed annuitant, including Members, must be reemployed at least 1 year before a supplemental annuity is payable and must be reemployed at least 5 years before the annuity can be recomputed. The features of the FERS pension plan that differ by employee group are discussed in the next section. The FERS pension plan raised the retirement age for general employees and congressional staff. It adopted a Minimum Retirement Age (MRA) concept that gradually increases, from age 55 to age 57, the earliest age at which general employees and congressional staff are eligible for optional retirement. However, FERS continued the CSRS practice of allowing Members of Congress to retire at younger ages and with fewer years of service than general employees and congressional staff. General employees and congressional staff are eligible to retire at the MRA with 30 years of service, at age 60 with 20 years, and at age 62 with 5 years. Members of Congress are eligible to retire at the same age and service combinations but may also retire at age 50 with 20 years of service or at any age with 25 years. FERS allows law enforcement officers, firefighters, and air traffic controllers to retire at age 50 with 20 years of service or at any age with 25 years. In this manner, the eligibility requirements for all three groups were made the same under FERS. (Under CSRS, eligibility to retire at any age with 25 years of service is available to air traffic controllers only.) FERS also continued the mandatory retirement requirements and associated provisions in CSRS for law enforcement officers, firefighters, and air traffic controllers. FERS added a provision not included in CSRS that allows Members of Congress, congressional staff, and general employees to retire at the MRA with 10 years of service.
However, the accrued benefits for persons who retire under this provision are reduced by five-twelfths of 1 percent for each month (5 percent a year) they are younger than age 62. The benefit formulas for Members of Congress, congressional staff, law enforcement officers, firefighters, and air traffic controllers are all the same under the FERS pension plan. They receive 1.7 percent of their high-3 salaries for each of the first 20 years of service and 1 percent of high 3 for each year of service over 20 years. The FERS benefit formula for general employees is considerably less beneficial than the formula for the other groups. Benefits for general employees who retire before age 62 are calculated at 1 percent of high 3 for each year of service. To encourage later retirements, FERS uses a more generous benefit formula for general employees who retire at age 62 or older with at least 20 years of service. These employees’ benefits are calculated at 1.1 percent of high 3 for each year of service. To illustrate the effect of the different benefit formulas under the FERS pension plan, Members of Congress, congressional staff, law enforcement officers, firefighters, and air traffic controllers would all receive 44 percent of their high-3 salaries after 30 years of service. General employees would receive 30 percent if they were younger than age 62 and 33 percent if they were age 62 or older. Like CSRS, the FERS pension plan allows general employees and congressional staff to retire before attaining the age and service requirements for optional retirement. The age and service requirements for early retirement and the conditions under which early retirement may be granted are the same as those under CSRS. Also like CSRS, FERS does not include any separate early retirement provisions for Members of Congress, although some of the age and service requirements available to Members for optional retirement are the same as the early retirement requirements applicable to general employees and congressional staff. FERS also continued the CSRS practice of not allowing law enforcement officers, firefighters, and air traffic controllers to retire before meeting their age and service requirements for optional retirement unless they qualify for early retirement by adding service in other federal occupations. In such cases, the general employee provisions apply. A major difference between the CSRS and FERS provisions on early retirement is that FERS does not require reductions in earned benefits for employees who retire before attaining optional retirement eligibility. While CSRS requires benefits to be reduced by 2 percent for each year a retiree is younger than age 55, there is no such reduction in the FERS pension plan. The deferred retirement provisions under the FERS pension plan are considerably different from CSRS. The earliest age at which deferred benefits are available under CSRS, other than for Members of Congress, is age 62. However, FERS makes deferred benefits available to general employees and congressional staff when they would have qualified for optional retirement if they had not left the government. Under FERS, unreduced deferred benefits are paid to these employees at the MRA if they had at least 30 years of service, at age 60 if they had at least 20 years, and at age 62 with at least 5 years. To illustrate this point, an employee who resigns his job at age 52 after completing 30 years of service is eligible for deferred benefits at his MRA (age 55 to 57, depending on date of birth). 
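To put the FERS formulas side by side, here is a minimal sketch; the helpers are our own renderings of the percentages stated above, not an official calculator, and the closing comment restates the deferred retirement example.

```python
# Illustrative FERS pension percentages (percent of high-3 salary).
# Helper names are ours; this sketches the formulas stated above.

def fers_special(years):
    """Members, congressional staff, law enforcement officers,
    firefighters, and air traffic controllers: 1.7% for each of the
    first 20 years, 1% for each year over 20."""
    return 1.7 * min(years, 20) + 1.0 * max(years - 20, 0)

def fers_general(years, age):
    """General employees: 1% a year, or 1.1% a year when retiring at
    age 62 or older with at least 20 years of service."""
    rate = 1.1 if age >= 62 and years >= 20 else 1.0
    return rate * years

print(fers_special(30))      # 44.0
print(fers_general(30, 59))  # 30.0
print(fers_general(30, 62))  # 33.0
# Deferred example from the text: resign at age 52 with 30 years of
# service -> unreduced deferred benefit payable at the MRA (age 55 to
# 57, depending on date of birth).
```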
Under CSRS, the employee must wait until age 62 for the deferred benefits to begin. The FERS pension plan allows Members of Congress, law enforcement officers, firefighters, and air traffic controllers to receive deferred benefits at the same ages and years of service as general employees if they were not eligible for optional retirement at the time of separation. Deferred benefits for law enforcement officers, firefighters, and air traffic controllers under these circumstances are calculated under the general employee formula, but Members of Congress and congressional staff retain their special benefit formula. The FERS pension plan also provides deferred benefits to all groups at the MRA if they had completed at least 10 years of service. Deferred benefit amounts paid in these circumstances are reduced by 5 percent for each year the former employees are younger than age 62. Like CSRS, the FERS pension plan requires the groups with preferential benefits to make greater contributions to the retirement fund than general employees. General employees are required to contribute 7 percent of their salaries less the Social Security tax rate (now 6.2 percent), exclusive of Medicare. The current general employee contribution rate to the FERS pension plan is 0.8 percent of salary. Members of Congress, congressional staff, law enforcement officers, firefighters, and air traffic controllers must contribute 7.5 percent of salary less their Social Security taxes. Their current contribution rate to the FERS pension plan is 1.3 percent of salary. We requested comments on a draft of this report from the Director of OPM or his designee. On May 8, 1995, we met with the Chief of the Retirement Policy Division to discuss the report. He agreed that the report accurately portrayed the major features of CSRS and FERS. He also suggested certain wording changes that he felt would better describe the differing provisions for the various employee groups. In general, we incorporated the suggested changes. A copy of this report is being sent to Congressman Dan Miller, who also asked us for information on congressional retirement. Copies of the report are also being sent to the Director of OPM and other parties interested in federal retirement matters and will be made available to others on request. Assistant Director Robert E. Shelton and Senior Evaluator Laura G. Shumway developed the information for this report. Please contact me at (202) 512-5074 if you or your staffs have any questions about this report. This appendix lists CSRS provisions that differ among the various employee groups the system covers. Not only do the provisions vary by employee group, but they also vary by the type of retirement. The types of retirement shown in table I.1 include:
- optional retirement, in which employees may opt to retire and receive an immediate annuity;
- early retirement, in which employees may retire and receive an immediate annuity before attaining the age and service requirements for optional retirement.
The circumstances include (1) involuntary separations, such as loss of employment through job abolishment or reduction-in-force, and (2) voluntary separations occurring when an agency is undergoing a major reorganization, a major reduction-in-force, or a major transfer of function where a significant percentage of employees will be separated or have their salary grades reduced; and
- deferred retirement, in which employees leave federal service before qualifying for optional or early retirement, i.e., before qualifying for an annuity that can begin immediately. Their annuities are deferred until they reach a specified age.
Table I.1 (excerpts): CSRS provisions that differ by employee group and type of retirement.
- Optional retirement eligibility (Members): age 62, 5 years; age 55, 30 years (with reduction); age 50, 20 years (with reduction); any age, 25 years (with reduction); age 50, service in nine Congresses (with reduction).
- Reduction: none at age 60 and older; annuity reduced by one-twelfth of 1% for each month the Member is between ages 55 and 60 (1% a year) and by one-sixth of 1% for each month the Member is under age 55 (2% a year).
- Maximum benefit: 80% of high 3, excluding credit for unused sick leave; 80% of high 3; for Members, 80% of the greater of the final basic pay of the Member, the high 3 of the Member, or the final basic pay of the appointive position of a former Member.
- Early retirement eligibility: age 50, 20 years; any age, 25 years (with reduction if under age 55). The formula is the same as the optional retirement formula, with reduction if under age 55; the annuity is reduced by one-sixth of 1% for each month the employee is under age 55 (2% a year). For Members: none; see the optional and deferred retirement provisions (with reduction if under age 55).
- Deferred retirement (Members): annuity payable when the former Member reaches age 62 with at least 5 years of service; age 60 with at least 10 years of Member service; or age 50 with at least 20 years of federal service, including at least 10 years of Member service (with reduction).
- Mandatory retirement: air traffic controllers must retire at age 56, regardless of years of service.
- Reemployment (retirees other than Members): the annuity continues upon reemployment, but the salary is reduced by the amount of the annuity; as long as required contributions are made, an individual employed at least one year can receive a supplemental annuity upon separation, and an individual employed at least five years can have the annuity recomputed based on total service and a new high 3.
- Reemployment (Members): the annuity is suspended upon reemployment, and the Member again makes required contributions; upon separation, the annuity recommences and is either (1) recomputed with credit for additional service, regardless of how long the Member was reemployed, or (2) reinstated with the cost-of-living adjustments that occurred during reemployment.
Notes: Members cannot receive an annuity under these age and service requirements if they resign or are expelled from Congress. The high 3 for law enforcement officers includes availability pay or pay for administratively uncontrollable overtime. Law enforcement officers and firefighters may be retained to age 60 when the agency head believes the public interest requires that they stay.
This appendix lists FERS provisions that differ among the various employee groups the system covers. Not only do the provisions vary by employee group, but they also vary by the type of retirement. The types of retirement shown in table II.1 include:
- optional retirement, in which employees may opt to retire and receive an immediate annuity;
- early retirement, in which employees may retire and receive an immediate annuity before attaining the age and service requirements for optional retirement.
The circumstances include (1) involuntary separations, such as loss of employment through job abolishment or reduction-in-force, and (2) voluntary separations occurring when an agency is undergoing a major reorganization, a major reduction-in-force, or a major transfer of function where a significant percentage of employees will be separated or have their salary grades reduced; and
- deferred retirement, in which employees leave federal service before qualifying for optional or early retirement, i.e., before qualifying for an annuity that can begin immediately. Their annuities are deferred until they reach a specified age.
Table II.1 (excerpts): FERS provisions that differ by employee group and type of retirement.
- Optional retirement eligibility (general employees): MRA, 30 years; age 60, 20 years; age 62, 5 years; MRA, 10 years (with reduction). Formula for determining the annuity amount: 1.1% of high 3 for each year of service if age 62 or older with 20 years; 1% of high 3 for each year of service if under age 62, or if age 62 or older with less than 20 years. Reduction: none at MRA and 30 years, age 60 and 20 years, or age 62 and 5 years; for an employee retiring at MRA and 10 years, the annuity is reduced by five-twelfths of 1% for each month the employee is under age 62 (5% a year).
- Optional retirement eligibility (Members): age 62, 5 years; age 50, 20 years; any age, 25 years; MRA, 10 years (with reduction). Formula: if 5 or more years of congressional service, 1.7% of high 3 for each of the first 20 years of congressional service and 1% of high 3 for each year over 20 years of congressional service and any other federal service (including military service). Reduction: none at MRA and 30 years, age 60 and 20 years, age 62 and 5 years, age 50 and 20 years, or any age and 25 years; for a Member retiring at MRA and 10 years, the annuity is reduced by five-twelfths of 1% for each month the Member is under age 62 (5% a year).
- Early retirement: same as the optional retirement formula.
- Deferred retirement: annuity payable when the former employee reaches the age requirement for optional retirement, that is, the MRA with at least 30 years, age 60 with at least 20 years, age 62 with at least 5 years, or the MRA with at least 10 years (with reduction). Formula: same as the optional retirement formula. Reduction: none at age 62, age 60 and 20 years, or MRA and 30 years; for the MRA and 10 years, the annuity is reduced by five-twelfths of 1% for each month the former employee is under age 62 (5% a year).
- Survivors of separated employees: if the employee had at least 10 years of service, the annuity begins on the day after the former employee would have been age 62, if less than 20 years of service; age 60, if 20 through 29 years of service; or the MRA, if 30 or more years of service. Alternatively, the annuity can begin the day after death, but it is then computed to be actuarially equivalent to waiting for the above age and service combinations.
- Contributions (general employees): currently 0.8% of salary (7% of salary less the Social Security tax rate, other than Medicare).
- Mandatory retirement: air traffic controllers must retire at age 56 or as soon thereafter as they complete 20 years of service.
- Reemployment: the annuity continues upon reemployment, but the salary is reduced by the amount of the annuity; as long as required contributions are made, an individual employed at least one year can receive a supplemental annuity upon separation, and an individual employed at least five years can have the annuity recomputed based on total service and a new high 3 upon separation.
Notes: The high 3 for law enforcement officers includes availability pay or pay for administratively uncontrollable overtime. Law enforcement officers and firefighters may be retained to age 60 when the agency head believes the public interest requires that they stay.
Figure III.1: Maximum CSRS Annuities Immediately Available to Employee Groups Under the Optional Retirement Provisions. [Figure not reproduced; panel titles in the source include "Law enforcement officers and firefighters."]
Notes for figure III.1: The CSRS optional retirement provisions are described in appendix I. Under these provisions an individual may voluntarily retire with an immediate annuity, which may be unreduced or reduced depending on his/her age and years of service at the time of retirement. Special early out voluntary retirements, disability retirements, and involuntary retirements are not considered optional. The bars represent the maximum percentage of high 3 available at the years of service shown; the ages required to obtain these maximum percentages vary by group. A missing bar means the employee group cannot retire at that age and years of service under the optional retirement provisions. The percentages shown above the bars are rounded; for example, the 56 percent shown for general federal employees and air traffic controllers at 30 years of service is actually 56.25 percent. Law enforcement officers, firefighters, and air traffic controllers are omitted from the later panels because, in most cases, they have already retired at their mandatory retirement ages.
Figure IV.1: Maximum FERS Annuities Immediately Available to Employee Groups Under the Optional Retirement Provisions. [Figure not reproduced; panel titles in the source include "Law enforcement officers and firefighters."]
Notes for figure IV.1: The FERS optional retirement provisions are described in appendix II. Under these provisions an individual may voluntarily retire with an immediate annuity, which may be unreduced or reduced depending on his/her age and years of service at the time of retirement. Special early out voluntary retirements, disability retirements, and involuntary retirements are not considered optional. The bars represent the maximum percentage of high 3 available at the years of service shown; the ages required to obtain these maximum percentages vary by group. The percentages shown above the bars are rounded; for example, the 9 percent shown for Members of Congress at 5 years of service is actually 8.5 percent. If a general federal employee is age 62 with at least 20 years of service, the percentages are 22 percent with 20 years, 27.5 percent with 25 years, 33 percent with 30 years, and 38.5 percent with 35 years, because the multiplier increases from 1 percent for each year of service to 1.1 percent for each year of service.
Pursuant to a congressional request, GAO compared the retirement benefits available to Members of Congress and congressional staff with those available to other employees under the Civil Service Retirement System (CSRS) and the Federal Employees Retirement System (FERS). GAO found that: (1) CSRS provisions for Members of Congress are generally more generous than those for general employees; (2) Members of Congress, law enforcement officers, firefighters, and air traffic controllers may retire at a younger age and with fewer years of service than general and congressional employees; (3) Members, congressional staff, law enforcement officers, firefighters, and court judges enjoy a higher benefit formula than general employees; (4) air traffic controllers are covered by the general employee benefit formula, but they are guaranteed a minimum 50-percent pension; (5) general employees and Members are subject to having their pensions reduced if they retire early; (6) some special employee groups have their premium pay included in their benefit formulas; (7) employees who receive preferential benefits are required to make greater payroll contributions; (8) the FERS pension plan retains many of the CSRS advantages for Members, congressional staff, law enforcement officers, firefighters, and air traffic controllers and has eliminated or changed some CSRS provisions; and (9) under FERS, law enforcement officers, firefighters, and air traffic controllers enjoy provisions similar to those for Members of Congress.
The Department of Defense’s budget is the product of a complex process designed to develop an effective defense strategy that supports U.S. national security objectives. For munitions, the department generally does not have the combatant commands submit separate budgets, but relies on the military services’ budget submissions. Thus, the military services are largely responsible for determining requirements for the types and quantities of munitions that are bought. The Department of Defense Inspector General and GAO have issued numerous reports dating back to 1994 identifying systemic problems—such as questionable and inconsistently applied data, inconsistent processes among and between services, and unclear guidance—that have inflated the services’ requirements for certain categories of munitions and understated requirements for other categories. (For a listing of these reports, see app. II.) In 1997, as one step toward addressing these concerns, the Department of Defense issued Instruction 3000.4, which sets forth policies, roles and responsibilities, time frames, and procedures to guide the services as they develop their munitions requirements. This instruction is referred to as the capabilities-based munitions requirements process and is the responsibility of the Under Secretary of Defense for Acquisition, Technology, and Logistics. The instruction describes a multi-phased analytical process that begins when the Under Secretary of Defense for Policy develops—in consultation with the Chairman of the Joint Chiefs of Staff, the military services, and the combatant commands—policy for the Defense Planning Guidance. The Defense Intelligence Agency uses the Defense Planning Guidance and its accompanying scenarios, as well as other intelligence information, to develop a threat assessment. This assessment contains estimates and facts about the potential threats that the United States and allied forces could expect to meet in war scenarios. The combatant commanders (who are responsible for the theaters of war scenarios), in coordination with the Joint Chiefs of Staff, use the threat assessment to allocate each service a share of the identified targets by phases of the war. The services then develop their combat requirements using battle simulation models and scenarios to determine the number and mix of munitions needed to meet the combatant commanders’ specific objectives. Despite the department’s efforts to standardize the process and generate consistent requirements, many questions have continued to be raised about the accuracy or reliability of the munitions requirements determination process. In April 2001, we reported continuing problems with the capabilities-based munitions requirements determination process because the department (1) had yet to complete a database providing detailed descriptions of the types of targets on large enemy installations that would likely be encountered, based on warfighting scenarios; (2) had not set a time frame for completing its munitions effectiveness database; and (3) was debating whether to include greater specificity in its warfighting scenarios and to rate the warfighting scenarios by the probability of their occurrence. These process components significantly affect the numbers and types of munitions needed to meet the warfighting combatant command’s objectives. 
The department acknowledged these weaknesses and recognized that inaccurate requirements can negatively affect munitions planning, programming, and budget decisions, as well as assessments of the size and composition of the industrial production base. In responding to our report's recommendations, the department has taken a number of actions to correct the problems we identified. Our review of the requirements process and related documentation showed that the Department of Defense corrected the previously identified systemic problems in its process for determining munitions requirements, but the reliability of the process continues to be uncertain because of the department's failure to link the near-term munitions needs of the combatant commands with the purchases made by the military services based on computations derived from the department's munitions requirements determination process. Because of differences in how requirements are determined, asking how many munitions are needed can yield one answer from the combatant commanders and different answers from the military services. For this reason, the combatant commands may report shortages of munitions they need to carry out warfighting scenarios. We believe—and the department's assessment of its munitions requirements process recognizes—that munitions requirements and purchase decisions made by the military services should be more closely linked to the needs of the combatant commanders. The main issue that the department still needs to address is engaging the combatant commands in the requirements determination process, budgeting processes, and related purchasing decisions to minimize the occurrence of reported shortages. Because of the present gap between the combatant commands' munitions needs and the department's requirements determination process, which helps shape the services' purchasing decisions, munitions requirements are not consistently stated, and thus the amount of funding needed to alleviate possible shortages is not always fully understood. In April 2001, we reported that key components of the requirements determination process either had not been completed or had not been decided upon. At that time, the department had not completed a database listing detailed target characteristics for large enemy installations based on warfighting scenarios and had not developed new munitions effectiveness data to address deficiencies identified by the services and the combatant commanders. Additionally, the department had not determined whether to create more detailed warfighting scenarios in the Defense Planning Guidance or to rate scenarios in terms of their probability. We concluded that until these tasks were completed and incorporated into the process, questions would likely remain regarding the accuracy of the munitions requirements process as well as the department's ability to identify the munitions most appropriate to defeat potential threats. In response to our report, the department took actions during fiscal years 2001 and 2002 to resolve the following three key issues affecting the reliability of the munitions requirements process: List of targets—The department lacked a common picture of the number and types of targets on large enemy installations as identified in the warfighting scenarios, and, as a result, each of the services had been identifying targets on enemy installations differently.
To resolve this issue, the Joint Chiefs instructed the Defense Intelligence Agency, in coordination with the combatant commanders, to develop target templates that would provide a common picture of the types of potential targets on enemy installations. In August 2001, the department revised its capabilities-based requirements instruction to incorporate the target templates developed by the Defense Intelligence Agency as the authoritative threat estimate for developing munitions requirements. Munitions effectiveness data—The department was using outdated information to determine the effectiveness of a munition against a target and to predict the number of munitions necessary to defeat it. The department recognized that munitions effectiveness data are a critical component of requirements planning and that outdated information could over- or understate munitions requirements. To address this shortfall, the department updated its joint munitions effectiveness manual with up-to-date munitions effectiveness data for use by the services in their battle simulation models. Warfighting scenarios—The Defense Planning Guidance contains warfighting scenarios that detail conditions that may exist during the conduct of war; these scenarios are developed with input from several sources, including the Defense Intelligence Agency, the Joint Chiefs of Staff, and the services. This guidance should provide a common baseline from which the combatant commands and the services determine their munitions requirements. However, when the department adopted the capabilities-based munitions requirements instruction, details were eliminated in favor of broader guidance. To ensure that the combatant commanders and the services plan for the most likely warfighting scenario and do not use unlikely events to justify requirements for certain munitions, the department revised the Defense Planning Guidance to provide fewer warfighting scenarios and more detail on each. The department expected that these actions to improve the munitions requirements process would correct over- or understated requirements and provide the combatant commands with needed munitions. However, despite the department's efforts to enhance the requirements determination process, one problem area remains—inadequate linkage between the near-term munitions needs of the combatant commands and the purchases made by the military services based on computations derived from the department's munitions requirements determination process. Various actions taken to address this issue have not been successful. The disjunction between the department's requirements determination processes and the combatant commanders' needs is rooted in separate assessments done at different times. The services, as part of their budgeting processes, develop the department's munitions requirements using targets provided by the combatant commands (based on the Defense Intelligence Agency's threat report), battle simulation models, and scenarios to determine the number and mix of munitions needed to meet the combatant commanders' objectives in each war scenario. To develop these requirements, the services draw upon and integrate data and assumptions from the Defense Planning Guidance, warfighting scenarios, and target allocations, as well as estimates of repair and return rates for enemy targets and projected assessments of damage to enemy targets and installations.
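A deliberately simplified sketch of how allocated targets and effectiveness factors combine into a combat requirement appears below. Every number in it is invented for illustration; the actual process relies on classified scenario data and battle simulation models rather than a simple multiplication.

```python
# Toy illustration of a combat requirement computation. All values are
# hypothetical; real requirements come from battle simulation models.

# Expected munitions to defeat one target of each type, the kind of
# factor a joint munitions effectiveness manual supplies.
rounds_per_target = {"bunker": 2.4, "armor": 1.6, "artillery": 3.1}

# A service's share of the threat, by target type, as allocated by
# the combatant commander.
allocated_targets = {"bunker": 500, "armor": 1200, "artillery": 300}

combat_requirement = {
    t: round(allocated_targets[t] * rounds_per_target[t])  # whole rounds
    for t in allocated_targets
}
print(combat_requirement)
# {'bunker': 1200, 'armor': 1920, 'artillery': 930}
```

Updating the effectiveness factors, as the department did for its joint munitions effectiveness manual, changes the computed requirement even when the target allocation is unchanged, which is why outdated data could over- or understate requirements.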
Other munitions requirements are also determined; they include munitions needed (1) for forces not committed to support combat operations, (2) for forward presence and current operations, (3) to provide a post-theater of war combat capability, and (4) to train the forces, support service programs, and support peacetime operations. These requirements, in addition to the combat requirement, comprise the services' total munitions requirement. The total munitions requirement is then compared to available inventory and appropriated funds to determine how many of each munition the services will procure within their specified funding limits, and it is used to develop the services' Program Objectives Memorandum and their budget submissions to the President. Periodically, the combatant commanders prepare reports of their readiness status, including the availability of sufficient types and quantities of munitions needed to meet the combatant commanders' warfighting objectives, but these munitions needs are not tied to the services' munitions requirements or to the budgeting process. In determining readiness, the combatant commanders develop their munitions needs using their own battle simulation models, scenarios, and targets and give emphasis to the munitions they prefer to use or need for unique war scenarios to determine the number and mix of munitions they require to meet their warfighting objectives. The combatant commanders calculate their needs in various ways (unconstrained and constrained) and over various time periods (e.g., 30 days and 180 days). Unconstrained calculations are based on the combatant commanders' assessment of munitions needs, assuming that all needed munitions are available. Constrained calculations represent the combatant commanders' assessment of munitions needs to fight wars under certain rules of engagement that limit collateral damage and civilian and U.S. military casualties. Because the combatant commanders' battle simulation models and scenarios differ from those used by the military services, their munitions needs are different, which can result in reports of munitions shortages. In contrast, the U.S. Special Operations Command develops its combat requirements for the number and mix of munitions needed to meet its warfighting objectives using the same battle simulation models and scenarios that the services use, and it provides these requirements to the services, rather than providing only potential targets as other commands do. This permits the U.S. Special Operations Command to more directly influence the assumptions about specific weapons systems and munitions to be used. As a result of working together, the Command's and the services' requirements are the same. In an effort to close the gap between the combatant commanders' needs and the department's munitions requirements determination process, the department initiated a pilot project in 1999 to better align the combatant commanders' near-term objectives (which generally cover a 2-year period) with the services' long-term planning horizon (which is generally 6 years). Another benefit of the pilot was that the Joint Chiefs of Staff could validate the department's munitions requirements by matching requirements to target allocations. However, the Army, the Navy, and a warfighting combatant commander objected to the pilot's results because it allocated significantly more targets to the Air Force and fewer targets to the Army.
Army officials objected that the pilot's methodology did not adequately address land warfare, which is significantly different from air warfare. The Navy did not concur with the results, citing the lack of recognition for the advanced capabilities of future munitions. U.S. Central Command officials disagreed with the results, stating that a change in methodology should not in and of itself cause the allocation to shift. In July 2000, citing substantial concerns about the pilot, the Under Secretary of Defense for Acquisition, Technology, and Logistics suspended the target allocation for fiscal year 2000 and directed the services to use the same allocations applied to the fiscal year 2002 through 2007 Program Objectives Memorandum. In August 2000, the Joint Chiefs of Staff made another attempt to address the need for better linkage between the department's munitions requirements process and the combatant commanders' munitions needs. The combatant commanders were to prepare a near-term target allocation using a methodology developed by the Joint Chiefs of Staff. Each warfighting combatant commander developed two allocations—one for strike (air services) forces and one for engagement (land troops) forces for his area of responsibility. The first allocated specific targets to strike forces under the assumption that the air services can eliminate the majority of enemy targets. The second allocation assumed that less than perfect conditions exist (such as bad weather), which would limit the air services' ability to destroy their assigned targets and require that the engagement force complete the mission. The combatant commanders did not assign specific targets to the engagement forces, but they estimated the size of the expected remaining enemy land force. The Army and the Marines then were expected to arm themselves to defeat those enemy forces. The Joint Chiefs of Staff used the combatant commanders' near-year threat distribution and extrapolated that information to the last year of the Program Objectives Memorandum for the purpose of the services' munitions requirements planning. The department expected that these modifications would correct over- or understated requirements and bridge the gap between the warfighting combatant commanders' near-term interests and objectives and the services' longer planning horizon. However, inadequate linkage remains between the near-term munitions needs of the combatant commands and the department's munitions requirements determinations and purchases made by the military services. This is sometimes referred to as a difference between the combatant commanders' near-term focus (generally 2 years) and the services' longer-term planning horizon (generally 6 years). However, we believe that there is a more fundamental reason for the disconnect; it occurs because the department's munitions requirements determination process does not fully consider the combatant commanders' preferences for munitions and weapon systems to be used against targets identified in projected scenarios. On June 18, 2002, the department contracted with TRW Inc. to assess its munitions requirements process and develop a process that will include a determination of the near-year and out-year munitions requirements.
The assessment, which will build upon the capabilities-based munitions requirements process, is also expected to quantify the risk associated with any quantity differential between requirements and inventory and to achieve a balance among inventory, production, and consumption. A final report on this assessment is due in March 2003. The department's munitions requirements process provides varying answers for current munitions acquisitions because of the inadequate linkage between the near-term munitions needs of the combatant commands and the munitions requirements computed by the military services. As a result, the services are purchasing some critically needed munitions based on available funding and the contractors' production capacity. For example, in December 2001, both the services and the combatant commanders identified shortages for joint direct attack munitions (a munition preferred by each of the combatant commanders). According to various Department of Defense officials, these amounts differed and exceeded previously planned acquisition quantities. Therefore, the department entered into an agreement to purchase the maximum quantities that it could fund the contractor to manufacture and paid the contractor to increase its production capacity. In such cases, the department could purchase too much or too little, depending upon the quantities of munitions ultimately needed. While this approach may be needed in the short term, it raises questions about whether over the long term it would position the services to make the most efficient use of appropriated funds and whether the needs of combatant commands to carry out their missions will be met. Until the department establishes a more direct link between the combatant commanders' needs, the department's requirements determinations, and the services' purchasing decisions, the department will be unable to determine with certainty the quantities and types of munitions the combatant commanders need to accomplish their missions. As a result, the amount of munitions funds needed will remain uncertain, and assessments of the size and composition of the industrial production base will be negatively affected. Unless this issue is resolved, the severity of the situation will again be apparent when munitions funding returns to normal levels and shortages of munitions are identified by the combatant commands. We recommend that the Secretary of Defense establish a direct link between the munitions needs of the combatant commands—recognizing the impact of weapons systems and munitions preferred or expected to be employed—and the munitions requirements determinations and purchasing decisions made by the military services. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. The Director of the Office of the Under Secretary of Defense's Strategic and Tactical Systems provided written comments on a draft of this report. They are included in appendix III. The Department of Defense concurred with the recommended linkage of munitions requirements and combatant commanders' needs.
The Director stated that the department, through a munitions requirements study directed by the fiscal year 2004 Defense Planning Guidance, has identified this link as a problem and has established a solution that will be documented in the next update of Instruction 3000.4 in fiscal year 2003. The department also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Director, Office of Management and Budget. The report is also available on GAO's Web site at http://www.gao.gov. The scope and methodology of our work are presented in appendix I. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-4300. Key contributors to this letter were Ron Berteotti, Roger Tomlinson, Tommy Baril, and Nelsie Alcoser. To determine the extent to which improvements had been made to the Department of Defense's requirements determination process, we reviewed the Department's Instruction 3000.4, Capabilities-Based Munitions Requirements (to ascertain roles and oversight responsibilities and to identify required inputs into the process); 17 Department of Defense Inspector General reports and 4 General Accounting Office reports relating to the department's munitions requirements determination process (to identify reported weaknesses in the requirements determination process); and requirements determinations and related documentation, and we interviewed officials (to identify actions taken to correct weaknesses in the requirements determination process) from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, D.C.; Joint Chiefs of Staff (Operations, Logistics, Force Structure, Resources and Assessment), Washington, D.C.; and Army, Navy, and Air Force officials responsible for budgeting, buying, and allocating munitions. To determine whether the munitions requirements determination process was being used to guide current munitions acquisitions, we met with the services' headquarters officials (to determine how each service develops its munitions requirements, to obtain data on the assumptions and inputs that go into its simulation models, to see how each service reviews the outcome of its munitions requirement process, and to determine the basis for recent munitions purchases) and interviewed officials at U.S. Central Command and U.S. Special Operations Command, MacDill Air Force Base, Florida; U.S. Southern Command, Miami, Florida; U.S. Pacific Command; Headquarters Pacific Air Forces; U.S. Army Pacific; Marine Forces Pacific; U.S. Pacific Fleet, Oahu, Hawaii; U.S. Forces Korea; Eighth U.S. Army, Seoul, Korea; and 7th Air Force, Osan, Korea (to determine whether the munitions needed by the warfighters are available). We performed our review from March 2002 through July 2002 in accordance with generally accepted government auditing standards. Defense Logistics: Unfinished Actions Limit Reliability of the Munition Requirements Determination Process. GAO-01-18. Washington, D.C.: April 2001. Summary of the DOD Process for Developing Quantitative Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: February 24, 2000. Air Force Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: September 3, 1999. Defense Acquisitions: Reduced Threat Not Reflected in Antiarmor Weapon Acquisitions. GAO/NSIAD-99-105. Washington, D.C.: July 22, 1999.
U.S. Special Operations Command Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: May 10, 1999. Marine Corps Quantitative Munitions Requirements Process. Department of Defense Inspector General. Washington, D.C.: December 10, 1998. Weapons Acquisitions: Guided Weapon Plans Need to Be Reassessed. GAO/NSIAD-99-32. Washington, D.C.: December 9, 1998. Navy Quantitative Requirements for Munitions. Department of Defense Inspector General. Washington, D.C.: December 3, 1998. Army Quantitative Requirements for Munitions. Department of Defense Inspector General. Washington, D.C.: June 26, 1998. Management Oversight of the Capabilities-Based Munitions Requirements Process. Department of Defense Inspector General. Washington, D.C.: June 22, 1998. Threat Distributions for Requirements Planning at U.S. Central Command and U.S. Forces Korea. Department of Defense Inspector General. Washington, D.C.: May 20, 1998. Army's and Marine Corps' Quantitative Requirements for Blocks I and II Stinger Missiles. Department of Defense Inspector General. Washington, D.C.: June 25, 1996. U.S. Combat Air Power: Reassessing Plans to Modernize Interdiction Capabilities Could Save Billions. Department of Defense Inspector General. Washington, D.C.: May 13, 1996. Summary Report on the Audits of the Anti-Armor Weapon System and Associated Munitions. Department of Defense Inspector General. Washington, D.C.: June 29, 1995. Weapons Acquisition: Precision Guided Munitions in Inventory, Production, and Development. GAO/NSIAD-95-95. Washington, D.C.: June 23, 1995. Acquisition Objectives for Antisubmarine Munitions and Requirements for Shallow Water Oceanography. Department of Defense Inspector General. Washington, D.C.: May 15, 1995. Army's Processes for Determining Quantitative Requirements for Anti-Armor Systems and Munitions. Department of Defense Inspector General. Washington, D.C.: March 29, 1995. The Marine Corps' Process for Determining Quantitative Requirements for Anti-Armor Munitions for Ground Forces. Department of Defense Inspector General. Washington, D.C.: October 24, 1994. The Navy's Process for Determining Quantitative Requirements for Anti-Armor Munitions. Department of Defense Inspector General. Washington, D.C.: October 11, 1994. The Air Force's Process for Determining Quantitative Requirements for Anti-Armor Munitions. Department of Defense Inspector General. Washington, D.C.: June 17, 1994. Coordination of Quantitative Requirements for Anti-Armor Munitions. Department of Defense Inspector General. Washington, D.C.: June 14, 1994.
The Department of Defense (DOD) planned to spend $7.9 billion on acquiring munitions in fiscal year 2002. Ongoing military operations associated with the global war on terrorism have heightened concerns about the unified combatant commands having sufficient quantities of munitions. Since 1994, the DOD Inspector General and GAO have issued numerous reports identifying weaknesses and expressing concerns about the accuracy of the process used by the department to determine munitions requirements. DOD has improved its munitions requirements process by eliminating most of the systemic problems--correcting questionable and inconsistently applied data, completing target templates, and resolving issues involving the level of detail that should be included in planning guidance. However, a fundamental problem remains unaddressed--inadequate linkage between the near-term munitions needs of the combatant commands and the purchases made by the military services based on computations derived from the department's munitions requirements determination process. The department's munitions requirements process provides varied answers to current munitions acquisition questions because of this disjunction. As a result, the services are purchasing some critically needed munitions based on available funding and contractors' production capacity. Although this approach may be necessary in the short term, it raises questions as to whether over the long term it would position the services to make the most efficient use of appropriated funds and whether the needs of combatant commands to carry out their missions will be met.
When providers at VAMCs determine that a veteran needs outpatient specialty care, they request and manage consults using VHA's clinical consult process. Clinical consults include requests by physicians or other providers for both clinical consultations and procedures. A clinical consultation is a request seeking an opinion, advice, or expertise regarding evaluation or management of a patient's specific clinical concern, whereas a procedure is a request for a specialty procedure such as a colonoscopy. Clinical consults are typically requested by a veteran's primary care provider using VHA's electronic consult system. Once a provider sends a request, VHA requires specialty care providers to review it within 7 days and determine whether to accept the consult. If the specialty care provider accepts the consult—that is, determines the consult is needed and appropriate—an appointment is made for the patient to receive the consultation or procedure. In some cases, a provider may discontinue a consult for several reasons, including that the care is not needed, the patient refuses care, or the patient is deceased. In other cases, the specialty care provider may determine that additional information is needed and will send the consult back to the requesting provider, who can resubmit the consult with the needed information. Once the appointment is held, VHA's policy requires the specialty care provider to appropriately document the results of the consult, which then closes out the consult as completed in the electronic system. If an appointment is not held, staff are to document why they were unable to complete the consult. VHA's current guideline is that consults should be completed within 90 days of the request. In 2012, VHA created a database to capture all consults systemwide and, after reviewing these data, determined that the data were inadequate for monitoring consults. One issue identified was the lack of standard processes and uses of the electronic consult system across VHA. For example, in addition to requesting consults for clinical concerns, the system was also being used to request and manage a variety of administrative tasks, such as requesting patient travel to appointments. Additionally, VHA could not accurately determine whether patients actually received the care they needed or whether they received it in a timely fashion. According to VHA officials, approximately 2 million consults (both clinical and administrative) were unresolved for more than 90 days. Subsequently, VA's Under Secretary for Health convened a task force to address these and other issues regarding VHA's consult system. In response to the task force's recommendations, in May 2013 VHA launched the Consult Management Business Rules Initiative to standardize aspects of the consult process, with the goal of developing consistent and reliable information on consults across all VAMCs. The initiative requires VAMCs to complete four specific tasks between July 1, 2013, and May 1, 2014:
Review and properly assign codes to consistently record consult requests in the consult system;
Assign distinct identifiers in the electronic consult system to differentiate between clinical and administrative consults;
Develop and implement strategies for requesting and managing consults that are not needed within 90 days—known as "future care" consults; and
Conduct a clinical review as warranted and, as appropriate, close all unresolved consults—those open more than 90 days.
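To make the 90-day guideline concrete, the following sketch models a consult as a request date plus an optional completion date and flags consults that remain open past the guideline. This is a minimal illustration, not VHA's actual data model; the class, field names, and threshold parameter are assumptions made for this example.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative stand-in for a consult record; VHA's real system differs.
@dataclass
class Consult:
    requested: date            # date the provider entered the consult request
    completed: Optional[date]  # date results were documented, or None if open

def days_open(consult: Consult, as_of: date) -> int:
    """Days elapsed from the request to completion (or to as_of if still open)."""
    end = consult.completed or as_of
    return (end - consult.requested).days

def is_unresolved(consult: Consult, as_of: date, guideline_days: int = 90) -> bool:
    """True if the consult is still open past the 90-day completion guideline."""
    return consult.completed is None and days_open(consult, as_of) > guideline_days

consults = [
    Consult(date(2013, 7, 1), date(2013, 9, 5)),  # completed in 66 days: timely
    Consult(date(2013, 7, 1), None),              # open 123 days as of November 1
]
today = date(2013, 11, 1)
for c in consults:
    print(days_open(c, today), is_unresolved(c, today))
```

Note that under this simplification a "future care" consult would look overdue even though no care is yet clinically needed, which is one reason the business rules call for handling such requests separately.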
At the time of our December 2012 review, VHA measured outpatient medical appointment wait times as the number of days elapsed from the patient's or provider's desired date, as recorded in the VistA scheduling system by VAMCs' schedulers. In fiscal year 2012, VHA had a goal of completing new and established patient specialty care appointments within 14 days of the desired date. VHA established this goal based on its performance reported in previous years. To facilitate accountability for achieving its wait time goals, VHA includes wait time measures—referred to as performance measures—in its VISN directors' and VAMC directors' performance contracts, and VA includes measures in its budget submissions and performance reports to Congress and stakeholders. The performance measures, like wait time goals, have changed over time. Officials at VHA's central office, VISNs, and VAMCs all have oversight responsibilities for the implementation of VHA's scheduling policy. For example, each VAMC director, or designee, is responsible for ensuring that clinics' scheduling of medical appointments complies with VHA's scheduling policy and for ensuring that any staff who can schedule medical appointments in the VistA scheduling system have completed the required VHA scheduler training. In addition to the scheduling policy, VHA has a separate directive that establishes policy on the provision of telephone service related to clinical care, including facilitating telephone access for medical appointment management. As we reported in our April 9, 2014, testimony statement, our preliminary work identified examples of delays in veterans receiving requested outpatient specialty care at the five VAMCs we reviewed. VAMC officials cited increased demand for services, and patient no-shows and cancelled appointments, among the factors that hinder their ability to meet VHA's guideline for completing consults within 90 days. Specifically, several VAMC officials discussed a growing demand for both gastroenterology procedures, such as colonoscopies, and consultations for physical therapy evaluations. Additionally, officials noted that because of difficulty in hiring and retaining specialists for these two clinical areas, they have developed periodic backlogs in providing services. Officials at these facilities indicated that they try to mitigate backlogs by referring veterans for care with non-VA providers. However, this strategy does not always prevent delays in veterans receiving timely care. For example, officials from two VAMCs told us that non-VA providers are not always available. Examples of consults that were not completed in 90 days include the following:
For 3 of 10 gastroenterology consults we reviewed for one VAMC, we found that between 140 and 210 days elapsed from the dates the consults were requested to when the patient received care. For the consult that took 210 days, an appointment was not available and the patient was placed on a waiting list before having a screening colonoscopy.
For 4 of the 10 physical therapy consults we reviewed for one VAMC, we found that between 108 and 152 days elapsed, with no apparent actions taken to schedule an appointment for the veteran. The patients' files indicated that due to resource constraints, the clinic was not accepting consults for non-service-connected physical therapy evaluations. In 1 of these cases, several months passed before the veteran was referred to non-VA care, and he was seen 252 days after the initial consult request.
In the other 3 cases, the physical therapy clinic sent the consults back to the requesting provider, and the veterans did not receive care for those consults. For all 10 of the cardiology consults we reviewed for one VAMC, we found that staff initially scheduled patients for appointments between 33 and 90 days after the request, but medical files indicated that patients either cancelled or did not show for their initial appointments. In several instances patients cancelled multiple times. In 4 of the cases, VAMC staff closed the consults without the patients being seen; in the other 6 cases, VAMC staff rescheduled the appointments for times that exceeded the 90-day time frame. As we also reported, our preliminary work identified variation in how the five VAMCs we reviewed have implemented key aspects of VHA's business rules, which limits the usefulness of the data in monitoring and overseeing consults systemwide. As previously noted, VHA's business rules were designed to standardize aspects of the consult process, thus creating consistency in VAMCs' management of consults. However, VAMCs have reported variation in how they are implementing certain tasks required by the business rules. For example, VAMCs have developed different strategies for managing future care consults—requests for specialty care appointments that are not clinically needed within 90 days. At one VAMC, officials reported that specialty care providers have been instructed to discontinue consults for appointments that are not needed within 90 days; requesting providers are to track these consults outside of the electronic consult system and resubmit them closer to the date the appointment is needed. These consults would not appear in VHA's systemwide data once they have been discontinued. At another VAMC, officials stated that appointments for specialty care consults are scheduled regardless of whether the appointments are needed beyond 90 days. These future care consults would appear in VHA consult data and would eventually appear on a timeliness report as consults open for more than 90 days. Officials from this VAMC stated that they continually have to explain to VISN officials who monitor the VAMC's consult timeliness that these open consults do not necessarily mean that care has been delayed. Officials from another VAMC reported piloting a strategy in its gastroenterology clinic in which future care consults are entered in an electronic system separate from the consult and appointment scheduling systems; approximately 30 to 60 days before the care is needed, the requesting provider is notified to enter the consult request in the electronic consult system for the specialty care provider to complete. In addition, oversight of the implementation of VHA's business rules has been limited and has not included independent verification of VAMC actions. VAMCs were required to self-certify completion of each of the four tasks outlined in the business rules, and VISNs were not required to independently verify that VAMCs appropriately completed the tasks. Without independent verification, VHA cannot be assured that VAMCs implemented the tasks correctly. Furthermore, VHA did not require that VAMCs document how they addressed unresolved consults that were open for more than 90 days, and none of the five VAMCs in our review were able to provide us with specific documentation in this regard.
VHA officials estimated that as of April 2014, about 450,000 of the approximately 2 million consults (both clinical and administrative consults) remained unresolved systemwide. VAMC officials noted several reasons that consults were either completed or discontinued in this process of addressing unresolved consults, including improper recording of consult notes, patient cancellations, and patient deaths. At one of the VAMCs we reviewed, a specialty care clinic discontinued 18 consults the same day that a task for addressing unresolved consults was due. Three of these 18 consults were part of our random sample, and our review found no indication that a clinical review was conducted before the consults were discontinued. Ultimately, the lack of independent verification and documentation of how VAMCs addressed these unresolved consults may have resulted in VHA consult data that inaccurately reflected whether patients received the care they needed or received it in a timely manner. Although VHA's business rules were intended to create consistency in VAMCs' consult data, our preliminary work identified variation in key aspects of consult management that are not addressed by the business rules. For example, there are no detailed systemwide VHA policies on how to handle patient no-shows and cancelled appointments, particularly when patients repeatedly miss appointments, and this gap may make VAMCs' consult data difficult to assess: if a patient cancels multiple specialty care appointments, the associated consult would remain open and could inappropriately suggest delays in care. To manage this type of situation, one VAMC developed a local consult policy referred to as the "1-1-30" rule: a patient must receive at least 1 letter and 1 phone call, and be granted 30 days to contact the VAMC to schedule a specialty care appointment. If the patient fails to do so within this time frame, the specialty care provider may discontinue the consult. According to VAMC officials, several of the consults we reviewed would have been discontinued before reaching the 90-day threshold if the 1-1-30 rule had been in place at the time. Three VAMCs included in our review also noted some type of policy addressing patient no-shows and cancelled appointments, each of which varied in its requirements. Without a standard VHA-wide policy, consult data may reflect numerous local variations in how patient no-shows and cancelled appointments are handled. In December 2012, we reported that VHA's reported outpatient medical appointment wait times were unreliable and that inconsistent implementation of VHA's scheduling policy may have resulted in increased wait times or delays in scheduling timely outpatient medical appointments. Specifically, we found that VHA's reported wait times were unreliable because of problems with recording the appointment desired date in the scheduling system. Since, at the time of our review, VHA measured medical appointment wait times as the number of days elapsed from the desired date, the reliability of reported wait time performance was dependent on the consistency with which VAMC schedulers recorded the desired date in the VistA scheduling system. However, VHA's scheduling policy and training documents were unclear and did not ensure consistent use of the desired date. Some schedulers at VAMCs that we visited did not record the desired date correctly.
For example, the desired date was recorded based on appointment availability, which would have resulted in a reported wait time that was shorter than the patient actually experienced. At each of the four VAMCs we visited, we also found inconsistent implementation of VHA’s scheduling policy, which impeded scheduling of timely medical appointments. For example, we found the electronic wait list was not always used to track new patients that needed medical appointments as required by VHA scheduling policy, putting these patients at risk for delays in care. Furthermore, VAMCs’ oversight of compliance with VHA’s scheduling policy, such as ensuring the completion of required scheduler training, was inconsistent across facilities. VAMCs also described other problems with scheduling timely medical appointments, including VHA’s outdated and inefficient scheduling system, gaps in scheduler and provider staffing, and issues with telephone access. For example, officials at all VAMCs we visited reported that high call volumes and a lack of staff dedicated to answering the telephones affected their ability to schedule timely medical appointments. VA concurred with the four recommendations included in our December 2012 report and reported continuing actions to address them. First, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to improve the reliability of its outpatient medical appointment wait time measures. In response, VHA officials stated that they implemented more reliable measures of patient wait times for primary and specialty care. In fiscal years 2013 and 2014, primary and specialty care appointments for new patients have been measured using time stamps from the VistA scheduling system to report the time elapsed between the date the appointment was created—instead of the desired date—and the date the appointment was completed. VHA officials stated that they made the change from using desired date to creation date based on a study that showed a significant association between new patient wait times using the date the appointment was created and self-reported patient satisfaction with the timeliness of VHA appointments. VA, in its FY 2013 Performance and Accountability Report, reported that VHA completed 40 percent of new patient specialty care appointments within 14 days of the date the appointment was created in fiscal year 2013; in contrast, VHA completed 90 percent of new patient specialty care appointments within 14 days of the desired date in fiscal year 2012. VHA also modified its measurement of wait times for established patients, keeping the appointment desired date as the starting point, and using the date of the pending scheduled appointment, instead of the date of the completed appointment, as the end date for both primary and specialty care. VHA officials stated that they decided to use the pending appointment date instead of the completed appointment date because the pending appointment date does not include the time accrued by patient no-shows and cancelled appointments. Second, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure VAMCs consistently implement VHA’s scheduling policy and ensure that all staff complete required training. 
In response, VHA officials stated that the department is in the process of revising the VHA scheduling policy to include changes, such as the new methodology for measuring wait times, and improvements and standardization of the use of the electronic wait list. In the interim, VHA distributed guidance, via memo, to VAMCs in March 2013 describing this information and also offered webinars to VHA staff on eight dates in April and May of 2013. To assist VISNs and VAMCs in the task of verifying that all staff have completed required scheduler training, VHA has developed a database that will allow a VAMC to identify all staff who have scheduled appointments and the volume of appointments scheduled by each; VAMC staff can then compare this information to the list of staff that have completed the required training. However, VHA officials have not established a target date for when this database would be made available for use by VAMCs. Third, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to require VAMCs to routinely assess scheduling needs for purposes of allocation of staffing resources. VHA officials stated that they are continuing to work on identifying the best methodology to carry out this recommendation, but stated that the database that tracks the volume of appointments scheduled by individual staff also may prove to be a viable tool to assess staffing needs and the allocation of resources. VHA officials stated that they needed to discuss further how VAMCs could use this tool, and that they had not established a targeted completion date for actions to address this recommendation. Finally, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure that VAMCs provide oversight of telephone access, and implement best practices to improve telephone access for clinical care. In response, VHA required each VISN director to require VAMCs to assess their current telephone service against the VHA telephone improvement guide and to electronically post an improvement plan with quarterly updates. VAMCs are required to routinely update progress on the improvement plan. VHA officials cited improvement in telephone response and call abandonment rates since VAMCs were required to implement improvement plans. Additionally, VHA officials said that the department has also contracted with an outside vendor to assess VHA’s telephone infrastructure and business process. In April 2014, VHA officials stated that they expected to receive the first report in approximately 2 months. Although VA has initiated actions to address our recommendations, we believe that continued work is needed to ensure these actions are fully implemented in a timely fashion. Furthermore, it is important that VA assess the extent to which these actions are achieving improvements in medical appointment wait times and scheduling oversight as intended. Ultimately, VHA’s ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care to veterans, who may have medical conditions that worsen if access is delayed. Chairman Sanders, Ranking Member Burr, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Bonnie Anderson, Assistant Director; Janina Austin, Assistant Director; Rebecca Abela; Jennie Apter; Jacquelyn Hamilton; David Lichtenfeld; Brienne Tierney; and Ann Tynan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Access to timely medical appointments is critical to ensuring that veterans obtain needed medical care. Over the past few years, there have been numerous reports of VAMCs failing to provide timely care to patients, including specialty care, and in some cases these delays have resulted in harm to patients. In December 2012, GAO reported that improvements were needed in the reliability of VHA's reported medical appointment wait times, as well as in oversight of the appointment scheduling process. Also in 2012, VHA found that systemwide consult data could not be adequately used to determine the extent to which veterans experienced delays in receiving outpatient specialty care. In May 2013, VHA launched the Consult Management Business Rules Initiative with the aim of standardizing aspects of the consult process. This statement highlights (1) preliminary observations GAO made in an April 9, 2014, testimony statement regarding VHA's management of outpatient specialty care consults, and (2) concerns GAO raised in its December 2012 report regarding VHA's outpatient medical appointment scheduling, and progress made implementing GAO's recommendations. To conduct this work, GAO reviewed documents and interviewed officials from VHA's central office. Additionally, GAO interviewed officials from five VAMCs for the consults work and four VAMCs for the scheduling work; these VAMCs varied in size, complexity, and location. As GAO previously reported in its April 9, 2014, testimony, its preliminary work examining the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) management of outpatient specialty care consults identified examples of delays in veterans receiving outpatient specialty care, as well as limitations in the implementation of new consult business rules designed to standardize aspects of the clinical consult process. For example, for 4 of the 10 physical therapy consults GAO reviewed for one VA medical center (VAMC), between 108 and 152 days elapsed with no apparent actions taken to schedule an appointment for the veteran. For 1 of these consults, several months passed before the veteran was referred for care to a non-VA health care facility. VAMC officials cited increased demand for services, and patient no-shows and cancelled appointments, among the factors that lead to delays and hinder their ability to meet VHA's guideline of completing consults within 90 days of being requested. GAO's preliminary work also identified variation in how the five VAMCs reviewed have implemented key aspects of VHA's business rules, such as strategies for managing future care consults—requests for specialty care appointments that are not clinically needed for more than 90 days. Such variation may limit the usefulness of VHA's data in monitoring and overseeing consults systemwide. Furthermore, oversight of the implementation of the business rules has been limited and has not included independent verification of VAMC actions. Because of the preliminary nature of this work, GAO is not making recommendations on VHA's consult process at this time. In its December 2012 report, GAO found that VHA's reported outpatient medical appointment wait times were unreliable. The reliability of reported wait time performance measures was dependent in part on the consistency with which schedulers recorded the desired date—defined as the date on which the patient or health care provider wants the patient to be seen—in the scheduling system.
However, VHA's scheduling policy and training documents were unclear and did not ensure consistent use of the desired date. GAO also found that inconsistent implementation of VHA's scheduling policy may have resulted in increased wait times or delays in scheduling timely medical appointments. For example, GAO identified clinics that did not use the electronic wait list to track new patients in need of medical appointments as required by VHA policy, putting these patients at risk for not receiving timely care. VA concurred with the four recommendations included in the report and, in April 2014, reported continued actions to address them. For example, in response to GAO's recommendation for VA to take actions to improve the reliability of its medical appointment wait time measures, officials stated that the department has implemented new patient wait time measures that no longer rely on the desired date recorded by a scheduler. VHA officials stated that the department also is continuing to address GAO's three additional recommendations. Although VA has initiated actions to address GAO's recommendations, continued work is needed to ensure these actions are fully implemented in a timely fashion. Ultimately, VHA's ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care to veterans, who may have medical conditions that worsen if access is delayed.
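The measurement change described above can be illustrated in miniature. The sketch below computes a wait time both ways for one hypothetical appointment: the pre-fiscal-year-2013 measure counts from the scheduler-recorded desired date, while the newer new-patient measure counts from the system-generated creation date. The dates and field names are invented for this example.

```python
from datetime import date

def wait_old(desired: date, completed: date) -> int:
    # Earlier measure: days from the scheduler-recorded desired date to the
    # completed appointment; its reliability depended on schedulers entering
    # the desired date correctly.
    return (completed - desired).days

def wait_new_patient(created: date, completed: date) -> int:
    # Fiscal year 2013-2014 new-patient measure: days from the date the
    # appointment was created in the scheduling system (a system time stamp,
    # not a scheduler judgment) to the completed appointment.
    return (completed - created).days

created = date(2013, 3, 1)    # appointment created in the scheduling system
desired = date(2013, 3, 20)   # desired date as recorded by a scheduler
completed = date(2013, 4, 10)

print(wait_old(desired, completed))          # 21 days
print(wait_new_patient(created, completed))  # 40 days
```

Because the creation date ordinarily precedes the desired date, the same appointment can show a longer wait under the new measure, which is consistent with the reported drop from 90 percent to 40 percent of new patient specialty care appointments completed within 14 days.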
The Omnibus Budget Reconciliation Act of 1989 (OBRA 1989) reformed the way Medicare pays for physician services in the traditional fee-for-service (FFS) program. OBRA 1989 required the establishment of a physician fee schedule and a system of spending growth targets, known as the Medicare Volume Performance Standard (MVPS), that became effective in 1992. In 1998, the sustainable growth rate (SGR) system of spending targets replaced MVPS. Both spending target systems were designed to moderate growth in the volume and intensity of services provided to beneficiaries. Prior to the establishment of the fee schedule, Medicare payment rates for physician services were based on historical charges for these services. The establishment of a fee schedule was an attempt to break the link between physicians' charges and Medicare payments. The fee schedule was not designed to reduce spending levels overall but to redistribute payments for services based on the relative resources used by physicians to provide different types of care. Under the fee schedule, Medicare pays for more than 7,000 physician services. To arrive at Medicare's fee, the service's relative value is multiplied by a dollar conversion factor. Currently, under SGR, the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, uses the dollar conversion factor to calculate Medicare fees and updates the conversion factor each calendar year to account for the change in the cost of providing physician services (as measured by the Medicare Economic Index (MEI)), adjusted for the extent to which actual spending aligns with spending targets. Fee updates represent the aggregate of increases and decreases across all services; the fees for specific services may rise or fall each year. In 1980, Medicare spending for physician services totaled $7.5 billion. (See fig. 1.) By 2003, Medicare spending on these services totaled $47.9 billion. During much of this period, increases in both the volume and intensity of services physicians provided to each beneficiary were an important factor in spending growth. Before the physician fee schedule was implemented, Medicare payments for physician services were largely based on historical charges. Experience in the 1980s repeated the experience of the prior decade: the Congress froze fees or limited fee increases, but spending continued to rise. From 1980 through 1991, for example, Medicare spending per beneficiary for physician services grew at an average annual rate of about 11.6 percent. (See fig. 2.) Total Medicare spending for physician services depends on the fee paid for each service, the number of beneficiaries served, the number of services provided to each beneficiary (volume), and the mix of those services—that is, the combination of more and less expensive services (intensity). Of these factors, physicians directly influence only the volume and intensity of services provided to beneficiaries. Much of the spending growth resulted from increases in the volume and intensity of services. For example, from 1986 until 1992, physician payment rates grew by less than 2 percent annually, while the volume and intensity of services rose, on average, by almost 8 percent per year.
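The fee schedule arithmetic described above can be sketched in a few lines. The relative value units, conversion factor, and volume figures below are round placeholder numbers rather than actual Medicare data; the point is only that total spending moves with volume even when fees are held flat, and intensity enters the same way through a shift toward services with higher relative values.

```python
# Hypothetical figures; actual relative values and conversion factors differ.

def medicare_fee(relative_value_units: float, conversion_factor: float) -> float:
    """Fee for one service: its relative value times the dollar conversion factor."""
    return relative_value_units * conversion_factor

def total_spending(fee: float, services_per_beneficiary: float,
                   beneficiaries: int) -> float:
    """Spending depends on the fee, the number of beneficiaries, and volume."""
    return fee * services_per_beneficiary * beneficiaries

fee = medicare_fee(relative_value_units=2.0, conversion_factor=35.00)  # $70.00
# Holding the fee flat, a rise in services per beneficiary raises spending.
print(total_spending(fee, services_per_beneficiary=1.5, beneficiaries=1_000_000))
print(total_spending(fee, services_per_beneficiary=1.6, beneficiaries=1_000_000))
```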
In 1986, the Congressional Budget Office stated that "[b]oth the price and the volume of services must be controlled to constrain costs…." In 1989, citing the need for spending targets to limit spending growth for physician services, the Secretary of Health and Human Services (HHS) testified that "Medicare physician spending has increased at compound annual rates of 16 percent over the past 10 years. And in spite of our best efforts to control volume and rein in expenditures, Medicare physician spending is currently out of control…. An expenditure target…sets an acceptable level of growth in the volume and intensity of physician services." Annual spending growth during the 1990s was far lower than in the preceding 10 years. Beginning in 1992, the Congress introduced spending targets for physician services to help constrain the rise in Medicare spending for physician services. Unlike prior attempts to control spending, spending target systems sought to limit the growth in the volume and intensity of services each year. From 1992 until 1999, the growth in the volume and intensity of physician services per Medicare beneficiary moderated. (See fig. 3.) During this time period, the average annual increase in Medicare spending due to changes in volume and intensity of services per beneficiary was about 1 percent, in contrast with the average annual growth of about 7 percent in the period from 1985 through 1991. The moderation of volume and intensity growth slowed the rate of increase in spending on physician services. This spending grew from $25.6 billion in 1992 to $36.9 billion in 2000, an average annual rate of 4.7 percent. In contrast, from 1985 through 1991, total spending increased at an average annual rate of about 10.8 percent. Beginning in 2000, the growth in volume and intensity of services per Medicare beneficiary began to rise, although the average annual rate of growth remained substantially below that experienced before spending targets were introduced. From 2000 to 2003, volume and intensity rose at an average annual rate of 5 percent. CMS actuaries project an average annual growth in volume and intensity of 3 percent from 2004 through 2013. Total spending on physician services is projected to grow by an average of 8 percent a year from 2000 through 2005. A target for spending on physician services serves as a budgetary control by automatically lowering fee updates in response to excess volume and intensity growth. Under Medicare's SGR spending target system and its MVPS predecessor, physician fees are adjusted annually to help bring actual spending in line with spending targets. Projected increases in volume and intensity, beyond what the current SGR targets allow, are expected to contribute to annual fee reductions for several years as the system tries to align spending with targets. The SGR system evolved from the MVPS system of spending targets, which was introduced with the physician fee schedule in 1992. The goal of MVPS was to provide an incentive for physicians to reduce volume and intensity growth and thus slow the high annual rate of increase in expenditures. Under MVPS, if a year's actual spending growth exceeded the target, future payment rates would be reduced, relative to what they would have been if actual spending had equaled the target, to offset the excess spending. If a year's actual spending growth fell short of the target, future payment rates would be increased.
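The single-year MVPS mechanism just described can be sketched as a simple correction applied to the next update. The actual MVPS formula was more involved; the additive arithmetic and all figures here are simplifying assumptions for illustration.

```python
# Simplified, additive illustration of a single-year spending target system.

def mvps_style_update(base_update_pct: float, actual_growth_pct: float,
                      target_growth_pct: float) -> float:
    """Trim (or boost) the next fee update by the amount actual spending
    growth overshot (or undershot) that single year's target."""
    return base_update_pct - (actual_growth_pct - target_growth_pct)

# Spending grew 9 percent against a 7 percent target, so a 3 percent base
# update is trimmed to 1 percent the following year.
print(mvps_style_update(base_update_pct=3.0, actual_growth_pct=9.0,
                        target_growth_pct=7.0))
```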
Concerns about the MVPS spending target prompted the Congress to create SGR’s system of spending targets. In its 1996 report to Congress, the Physician Payment Review Commission noted that, under MVPS, physician fees would fall over time unless there were continual declines in the volume and intensity of services provided. In response to the system’s perceived shortcomings, the Congress took action in 1997 to replace it with the SGR system. The SGR system was created in the Balanced Budget Act of 1997 (BBA) and revised by the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 (BBRA) and, most recently, by MMA. Similar to MVPS, SGR sets spending targets for physician services and updates fees to bring spending in line with those targets. Under the SGR system, if spending exceeds the target, future fee updates are reduced. If spending falls short of the target, future fee updates are increased. By adjusting fees when prior-year spending has deviated from the target, SGR attempts to moderate the growth in total Medicare outlays for physician services. Specifically, the SGR formula establishes expenditure targets as follows: from a base year—1996—the targets are updated each year to account for four factors: (1) changes in the number of Medicare beneficiaries in traditional fee-for-service; (2) growth in the costs of providing physician services, laboratory tests, and Medicare-covered outpatient prescription drugs; (3) growth in the overall economy, as measured by changes in real per capita gross domestic product (GDP); and (4) changes in expenditures that result from changes in laws or regulations. Spending and targets are estimated from data available in the fall, when CMS sets physician fees for the next calendar year. Because SGR spending targets are cumulative, the target set for a specific year is affected by the targets set in all prior years. BBRA required CMS, in calculating each year’s SGR spending target and fee update, to revise the targets set for the two previous years using the most recent available data. SGR differs from MVPS in two key ways. The first relates to volume and intensity growth limits. MVPS relied, in part, on historical trends in volume and intensity growth to set new targets each year, whereas SGR ties allowable volume and intensity increases to the growth in real GDP per capita. Under SGR, real spending per beneficiary—that is, spending adjusted for the underlying cost of providing physician services—is allowed to grow at the same rate that the national economy grows over time on a per-capita basis—currently projected to be about 2 percent annually. If volume and intensity grow faster, the annual increase in physician fees will be less than the estimated increase in the cost of providing services. Conversely, if volume and intensity grow more slowly than 2 percent annually, the SGR system permits physicians to benefit from fee increases that exceed the increased cost of providing services. To reduce the effect of business cycles on physician fees, economic growth is measured as the 10-year moving average change in real per capita GDP. This measure is projected to range from 2.1 percent to 2.5 percent during the 2005 through 2014 period. A second difference is that MVPS compared target and actual expenditures in a single year, whereas SGR compares targets and actual expenditures cumulatively from a base year. 
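The four-factor target update described above lends itself to a short sketch. The growth rates and base-year dollar figure below are placeholders, not CMS estimates. Because each year's target scales the prior year's, the targets are cumulative: a missed target or a data revision in any year carries into every later year.

```python
# Placeholder figures; CMS's actual factor estimates differ and are revised.

def next_target(prev_target: float, enrollment_growth: float, cost_growth: float,
                gdp_growth: float, law_effect: float) -> float:
    """Scale the prior year's target by the four statutory factors:
    fee-for-service enrollment, input costs (physician services, lab tests,
    and covered drugs), 10-year average real per capita GDP growth, and
    spending changes required by law or regulation."""
    return (prev_target * (1 + enrollment_growth) * (1 + cost_growth)
                        * (1 + gdp_growth) * (1 + law_effect))

target = 48.0e9  # hypothetical base-year allowed spending, in dollars
for year in range(1997, 2001):
    target = next_target(target, enrollment_growth=0.005, cost_growth=0.02,
                         gdp_growth=0.02, law_effect=0.0)
    print(year, round(target / 1e9, 1))
```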
The cumulative nature of SGR's spending targets increases the potential volatility of physician fee updates because the system requires that excess spending in any year be recouped in future years. Conceptually, this means that if spending has exceeded the SGR targets, fee updates in future years must be lowered sufficiently to offset the excess spending. Conversely, the system also requires that if spending has fallen short of the targets, fees must be increased to boost future spending. SGR limits how much fees can be adjusted when spending has missed the target. SGR's performance adjustment may decrease fees by as much as 7 percentage points below the percentage change in MEI when spending has exceeded the target and may increase fees by as much as 3 percentage points above the percentage change in MEI when spending has fallen short of the target. SGR adjustments to the fees are determined by how much the cumulative amount of spending on physician services since 1996 differs from the cumulative spending target since that base year. From the introduction of the fee schedule in 1992 through 2001, physicians generally experienced real increases in their fees—that is, fees increased more than the increase in the cost of providing physician services, as measured by MEI. Specifically, during that period, fees increased by 39.7 percent, whereas MEI increased by 25.9 percent. In 2002, however, SGR reduced fees by 4.8 percent, despite an estimated 2.6 percent increase in the costs of providing physician services. (See fig. 4.) SGR reduced fees in 2002 because estimated spending for physician services—cumulative since 1996—exceeded the target by approximately $8.9 billion, or 13 percent of projected 2002 spending. In part, the fee reduction occurred because CMS revised upward its estimates of previous years' actual spending. Specifically, CMS found that its previous estimates had omitted a portion of actual spending for 1998, 1999, and 2000. In addition, in 2002 CMS lowered the 2 previous years' spending targets based on revised GDP data from the Department of Commerce. Based on the new higher spending estimates and lower targets, CMS determined that fees had been too high in 2000 and 2001. In setting the 2002 physician fees, the SGR system reduced fees to recoup previous excess spending. The update would have been about negative 9 percent if the SGR system had not limited its decrease to 7 percentage points below MEI. Because the previous overpayments were not fully recouped in 2002, and because of volume and intensity increases, by 2003 physicians were facing several more years of fee reductions to bring cumulative Medicare spending on physician services in line with cumulative targets. However, CMS had determined that its authority to revise previous spending targets was limited. In 2002, CMS noted that the 1998 and 1999 spending targets had been based on estimated growth rates for beneficiary fee-for-service enrollment and real per capita GDP that actual experience had shown to be too low. If the estimates could have been revised, the targets for those and subsequent years would have been increased. However, at the time that CMS acknowledged these errors, the agency concluded that it was not allowed to revise these estimates. Without such revisions, the cumulative spending targets remained lower than if the errors had not been made. In late 2002, the estimate of SGR called for a negative 4.4 percent fee update in 2003.
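The bounds on the performance adjustment can be written as a simple clamp. The additive arithmetic below is a simplification of the statutory formula (which is multiplicative and includes other adjustments), and the raw adjustment figure is invented to echo the 2002 situation, in which recoupment would have pushed the update to roughly negative 9 percent but the 7-percentage-point floor limited the decrease.

```python
# Simplified, additive illustration of SGR's bounded performance adjustment.

def bounded_update(mei_pct: float, raw_adjustment_pct: float) -> float:
    """Clamp the performance adjustment to the range of 7 percentage points
    below to 3 points above the change in MEI, then apply it."""
    adjustment = max(-7.0, min(3.0, raw_adjustment_pct))
    return mei_pct + adjustment

# MEI up 2.6 percent, but recouping prior excess spending would call for an
# unbounded adjustment of about -11.6 points (2.6 - 11.6 = roughly -9).
print(bounded_update(mei_pct=2.6, raw_adjustment_pct=-11.6))  # -4.4: floor binds
```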
With the passage of the Consolidated Appropriations Resolution of 2003, CMS determined that it was authorized to correct the 1998 and 1999 spending targets. Because SGR targets are cumulative measures, these corrections resulted in an average 1.4 percent increase in physician fees for services for 2003. In 2003, MMA averted additional fee reductions projected for 2004 and 2005 by specifying an update to physician fees of no less than 1.5 percent for 2004 and 2005. The MMA increases replaced SGR fee reductions of 4.5 percent in 2004 and an estimated 3.6 percent in 2005. Because MMA did not make corresponding revisions to SGR’s spending targets, SGR will reduce fees beginning in 2006, to offset the additional spending caused by MMA’s fee increases. In addition, recent growth in volume and intensity, which has been larger than SGR targets allow, will further compound the problem of excess spending that needs to be recouped. The 2004 Medicare Trustees Report announced that the projected physician update would be about negative 5 percent for 7 consecutive years beginning in 2006; the result is a cumulative reduction in physician fees of more than 31 percent from 2005 to 2012, while physicians’ costs of providing services, as measured by MEI, are projected to rise by 19 percent. To a large extent, the physician fee cuts projected by Medicare’s Trustees are required under SGR’s system of cumulative spending targets to make up for excess spending in earlier years. MMA added to the excess spending by specifying minimum fee updates for 2004 and 2005 without resetting the spending targets for those years. As a result, physician fee cuts were postponed, not avoided. In considering the projected fee cuts, however, it is important to recall that Congress originally established Medicare spending targets for physician services in response to runaway spending in the 1980s. The recent increase in volume and intensity growth suggests that Medicare faces a fundamental physician spending growth problem even if the SGR slate of missed spending targets were somehow wiped clean. Currently, projected Medicare spending for physician services exceeds what policymakers have specified—through the parameters of the SGR system—is the appropriate amount to spend. Because of expected increases in the volume and intensity of services provided by physicians, real spending per beneficiary is projected to grow by more than 3 percent per year. SGR, designed to promote fiscal discipline, allows such spending to grow by just over 2 percent per year. If the growth in real spending per beneficiary is not lowered through other means, SGR will mechanically reduce fee updates in an attempt to impose fiscal discipline and moderate total spending increases. Although this mechanical response may be desirable from a budgetary perspective, any consequences for physicians and their patients are uncertain. Mr. Chairman, this concludes my prepared statement. I will be happy to answer questions you or other Subcommittee Members may have. For further information regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101. James Cosgrove, Jessica Farb, Hannah Fein, and Jennifer Podulka contributed to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Sustainable Growth Rate (SGR) system, implemented in 1998 and subsequently revised, is used to update Medicare's physician fees and moderate the growth in Medicare spending for physician services. SGR, and a predecessor system implemented in 1992, were designed to reduce physician fee updates if spending growth exceeded a specified target. Although spending growth slowed substantially under both systems, concerns about SGR arose when the system caused fees to decline by 5.4 percent in 2002. GAO was asked to discuss (1) Medicare physician spending trends both before and after the implementation of spending targets and (2) the evolution and mechanics of the SGR system. This statement is largely based on GAO's previous work on Medicare spending trends and the SGR system. Medicare spending on physician services grew rapidly through the 1980s, at an average annual rate of 13.4 percent, even though physician fee increases were subject to some limits. The spending growth was driven by increases in the number of services provided to each beneficiary--referred to as volume--and an increase in the average complexity and costliness of those services--referred to as intensity. Recognizing that expenditure growth of this magnitude was not sustainable, the Congress attempted to impose fiscal discipline by establishing a system of spending targets for Medicare physician services along with a fee schedule beginning in 1992. Following the introduction of spending targets, volume and intensity growth slowed substantially during the 1990s. In recent years, under the SGR system, volume and intensity growth has increased, but not at the rates experienced during the 1980s before spending targets were in place. SGR, the current system of spending targets, evolved from the target system that went into effect in 1992. Under the SGR system, physician fees are adjusted up or down, depending on whether actual spending has fallen below or has exceeded the target. Fees increase at least as fast as the costs of providing physician services as long as volume and intensity growth remains below a specified rate--currently, a little more than 2 percent a year. If volume and intensity grow faster than the specified rate, SGR lowers fee increases or causes fees to fall. Physicians raised concerns about SGR when fees dropped significantly in 2002, a decline that was, in part, a correction for fees that had been set too high in prior years because of errors in forecast estimates and other data. Congressional action averted fee reductions, and projected fee reductions, for 2003 through 2005. However, beginning in 2006, fees are projected to resume falling for several years, partly to recoup the excess spending accumulated from averted cuts in previous years and partly because real per beneficiary spending on physician services is projected to grow faster than allowed under SGR. A dilemma for policymakers posed by projected fee reductions is that while SGR's automatic responses work as intended from a budgetary perspective, the consequences for physicians and their patients are uncertain.
To answer the three questions, we analyzed information provided by the District of Columbia on past, current, and projected borrowing and met with District officials to discuss the District's debt. We also analyzed applicable laws that authorize borrowing and impose limitations on that borrowing. We obtained information on other jurisdictions' debt limitations, debt ratios, and bond ratings from Moody's Investors Service, Standard & Poor's Corporation, and Fitch Investors Service, Inc., and discussed this information with officials of those organizations. We did not independently verify the information provided by the investor service organizations on other jurisdictions. We conducted our work from August 1994 to October 1994 in accordance with generally accepted government auditing standards. The District of Columbia provided comments on a draft of this report. These comments are discussed in the "Agency Comments and Our Evaluation" section. We have incorporated agency views where appropriate. The Home Rule Act authorizes and sets limits on various types of short- and long-term debt that is backed by the full faith and credit of the District. The limits have not been revised since enactment of the provisions in the Home Rule Act that authorized the various types of debt. The District also is authorized to issue revenue bonds that are not backed by the full faith and credit of the District. Finally, the District is authorized to borrow from the U.S. Treasury to meet the District's general expenses. The District can issue Tax Revenue Anticipation Notes (TRANS) to compensate for expected cash shortfalls related to delays in receipt of projected tax revenue. Although TRANS are renewable, they must be repaid no later than the last day of the fiscal year in which the notes were issued. The total amount of outstanding TRANS at any time is limited to 20 percent of the District's total anticipated revenue for the fiscal year. The amount of TRANS borrowing is discussed later in this letter. The District can also issue short-term general obligation notes to meet appropriation requirements when budgeted grants and private contributions are not realized. Similar to TRANS, these notes are renewable. However, they must be repaid no later than the last day of the fiscal year following the year in which they were issued. The amount of general obligation notes issued during a fiscal year is limited to 2 percent of the District's total appropriations, or approximately $70 million for fiscal year 1994. District officials said they have never issued this type of note. In lieu of these short-term borrowing vehicles, the District has borrowed moneys from its capital projects fund. The District's annual appropriation specifically states that "the Mayor shall not expend any moneys borrowed for capital projects for operating expenses of the District of Columbia government." The District's Corporation Counsel has concluded that the District does not violate the appropriation act restriction as long as borrowings from the Capital Projects Fund are repaid before the end of the fiscal year in which the borrowing is made. The District borrowed $140 million from the Capital Projects Fund in fiscal year 1993 to finance seasonal cash flow needs. These funds were repaid before the end of the fiscal year. In fiscal year 1994, the District again borrowed from the Capital Projects Fund to compensate for cash flow shortages due to a delay in the receipt of the Federal Payment.
The District borrowed $40 million in October 1993 and repaid it shortly thereafter; it borrowed another $40 million from the Capital Projects Fund in early September 1994 and repaid it before the end of the month. The District also has the authority to issue long-term debt. Long-term debt generally takes the form of general obligation bonds. The District can issue general obligation bonds to refund (that is, refinance) existing debt or to finance capital projects. In fiscal year 1991, the District also received authority to eliminate the general fund's existing accumulated deficit by issuing general obligation bonds. The Home Rule Act restricts the District from issuing long-term general obligation bonds if total debt service in a fiscal year will exceed 14 percent of the District's estimated revenues for the year the bonds are issued. This debt limitation is calculated by the Treasurer's Office when the District requests new general obligation borrowing. At that time, the highest projected cost of debt service (including debt service on both general obligation bonds and long-term U.S. Treasury debt) for any fiscal year, including debt service on the projected borrowing, cannot exceed 14 percent of the District's estimated revenues for the fiscal year the bonds will be issued. Revenues for this calculation do not include court fees and any fees or revenues directed to servicing revenue bonds, retirement contributions, revenues from retirement systems, revenues from Treasury loans, and the sale of general obligation or revenue bonds. This debt service calculation does not include refinancing costs of previous bonds and any obligations associated with the Redevelopment Land Agency, the National Capital Housing Authority, or obligations pursuant to the authority contained in the District of Columbia Stadium Act of 1957. The specific amount of long-term borrowing is discussed later in this letter. The District may also borrow funds from the U.S. Treasury to finance its general expenses. Between 1939 and 1983, the District routinely borrowed from the U.S. Treasury under this provision. It has not borrowed from the U.S. Treasury since then. Under this provision, which originated before the enactment of the Home Rule Act, the Mayor of the District of Columbia may requisition the Secretary of the Treasury for "such sums as may be necessary, from time to time, to meet the general expenses of said District, as authorized by Congress, and such amounts so advanced shall be reimbursed by the said Mayor to the Treasury out of taxes and revenue collected for the support of the government of the said District of Columbia." The interest rate to be applied, if any, and the term of these borrowings are not specified. These borrowings are not subject to the 14-percent limitation on long-term debt. All general obligation and TRANS offering documents refer to this section of the D.C. Code and specify that the Mayor shall request funds from the U.S. Treasury as may be necessary to pay the principal and interest on the bonds when due. In addition, the District previously had authority to borrow funds from the U.S. Treasury to finance capital projects. While the authority for new U.S. Treasury borrowing for capital projects was terminated by 1983, the District had $71.8 million and the District's Water and Sewer Authority had $15.1 million in outstanding debt issued under this authority at September 30, 1994. The District's $71.8 million U.S.
Treasury debt is scheduled to be repaid by 2003, and the Water and Sewer Authority's U.S. Treasury debt is scheduled to be repaid by 2014. As previously noted, the debt service on U.S. Treasury long-term capital projects debt is included in the 14-percent limitation calculation described in the previous section of this report under long-term borrowing. The District of Columbia also is authorized to issue revenue bonds, notes, or other obligations to finance or refinance undertakings in the areas of (1) housing, (2) facilities for health, transit, utility, recreation, college, university, or pollution control, (3) college or university student loan programs, and (4) industrial and commercial development. Such revenue obligations are not general obligations or debt of the District backed by the full faith and credit or the taxing power of the District. Instead, they are payable from earnings of the respective projects and may be secured by mortgages on real property or creation of a security interest in other assets. In addition to the current types of financing allowed, the District is proposing to partly finance the construction of a new convention center and sports arena by authorizing District enterprises to issue revenue bonds that would include as security a pledge of dedicated taxes. This proposed method of financing requires amending the Home Rule Act. Specifically, the Home Rule Act would need to be revised to authorize the District (1) to issue such revenue bonds and (2) to delegate authority to District enterprises to issue the bonds and to receive and expend the dedicated revenues. Under this proposal, these bonds would not be subject to the 14-percent long-term debt service ceiling. The District's primary borrowing involves short-term tax revenue anticipation notes and long-term general obligation bonds. In May 1994, the District borrowed $200 million in short-term Tax Revenue Anticipation Notes, which were repaid in September 1994, before the end of the fiscal year. The District anticipates that in fiscal year 1995 it will need $250 million in short-term Tax Revenue Anticipation Notes ($125 million in February 1995 and $125 million in June 1995). These notes will be due in September 1995. The fiscal year 1994 short-term debt represented 6.0 percent of total anticipated revenues, and the fiscal year 1995 short-term debt would represent 7.2 percent of total anticipated revenues—considerably below the 20-percent limitation on TRANS borrowing discussed earlier in this report. The District does not make short-term borrowing projections beyond the year for which a budget has been submitted to the District of Columbia Council. As of September 30, 1994, the District had $3.65 billion in long-term debt (both general obligation and U.S. Treasury debt). As calculated by the District Treasurer's Office, total debt service for these long-term obligations is expected to be $409 million in fiscal year 1998, the highest projected year, or 11.42 percent of total expected fiscal year 1994 revenues. This debt service percent is projected to grow in the future. The Budget Office estimates that capital borrowing will be $250 million annually from fiscal years 1995 through 1998 and $190 million in each of fiscal years 1999 and 2000. Based on these estimates of future general obligation borrowing and Budget Office projections of revenues, we estimate that the debt service percent will rise to 13.84 percent by 2000. Figure 1 shows the debt service percents for fiscal years 1989 through 2000.
More details of the assumptions and calculations for the data contained in figure 1 are provided in appendix I. When analyzing the projected debt service percents, two additional factors need to be considered. First, the projections in the chart are based on revenue estimates. As we noted in our June 1994 report, the District has overestimated some revenues in the past. Lower-than-expected revenues would increase the debt service percent. Second, as also discussed in our June 1994 report, the District plans to limit general obligation bond borrowing to less than what is needed for capital projects. Capital funding requirements are particularly significant for the D.C. Public Schools and the Water and Sewer Authority. Various debt indicators are available to compare the debt characteristics of jurisdictions. For example, investment services provide ratings of bonds, which assess the amount of risk associated with borrowing. Moody’s Investors Service ratings range from Aaa (best quality) to C (lowest quality). Tables 1 and 2 show the long-term bond ratings of the 20 largest cities and all 50 states. As can be seen from tables 1 and 2, the District’s bond rating of Baa is lower than that of any state and all but two of the cities listed. According to Moody’s Investors Service analysts, this rating reflects the District’s long history of financial pressures and budget-balancing difficulties. The rating has been the same since the District first issued general obligation bonds in 1984. Other comparisons of the District’s debt with that of other jurisdictions are more problematic. For example, comparing the District’s 14-percent debt service limitation with other jurisdictions’ limits is difficult because the types of limits vary. In fact, a Moody’s Investors Service official told us that no state imposes a limit on cities and counties based on a percentage of total revenues like the District’s limitation. A May 1994 Moody’s Investors Service report that outlined debt restrictions imposed on cities and counties by their states noted that almost all states impose general obligation debt volume limits on cities and counties. In most states, the limit is based on a percentage of the local jurisdiction’s taxable real property. Although the limits imposed by states on cities and counties most often are related to property values, the way these limits are calculated varies widely. For example, some limits are based on the full property value and others on an adjusted value. In other states, limits exclude some types of borrowing, for example, enterprise fund borrowing or public school borrowing. Because the limits of other jurisdictions vary widely, we did not specifically determine how close other jurisdictions are to their legal limits. Other indices are routinely used by investment services to compare the extent of borrowing among jurisdictions. Two of these indices are per capita debt and the amount of debt compared with the value of real property subject to tax. Table 3 shows the median, high, and low values for these indices for various population categories of cities, counties, and states. The District of Columbia’s overall net debt per capita was $6,315, and its ratio of net debt to taxable real property was 8.1 percent. Although both ratios are high when compared with those of other cities, counties, or states, comparing the District’s debt limitations and amount of debt with those of other jurisdictions is problematic because of the unique nature of the District.
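As an illustration, the two indices discussed above can be computed as follows; the population and taxable property figures in this sketch are approximations backed out of the reported indices, not reported values.

```python
# Sketch of the two debt indices discussed above. The net debt figure uses
# the long-term debt reported in this letter; the actual index may rest on
# a slightly different debt base. The population and taxable property
# values are approximations backed out of the reported indices ($6,315 per
# capita and 8.1 percent), not official figures.

net_debt = 3.65e9            # District long-term debt at September 30, 1994
population = 578_000         # approximation, not a reported value
taxable_property = 45.0e9    # approximation, not a reported value

debt_per_capita = net_debt / population                     # ~$6,315
debt_to_property_pct = 100.0 * net_debt / taxable_property  # ~8.1 percent

print(round(debt_per_capita), round(debt_to_property_pct, 1))
```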
For example, debt limits for states, counties, and cities would affect only the debt incurred to finance the functions carried out by the specific jurisdiction. In contrast, the District has a single debt limit applicable to all of its governmental functions, whether these functions are representative of those carried out by states, counties, or cities. Thus, comparing the District’s ratios to state, county, or city debt ratios may not be meaningful to the extent the District’s debt is used to finance functions that overlap functions financed by state, county, and city debt. For example, the District’s debt would include debt related to typical city functions (for example, police and fire protection) as well as debt associated with typical state and county functions (for example, motor vehicle and driver licensing). District officials also pointed out that other unique factors make comparing the District to other jurisdictions difficult. They noted that, unlike in most cities and counties, sales and income taxes provide a substantial portion of the District’s tax revenue. Therefore, the ratio of debt to taxable real property is not a meaningful way to compare the District with other jurisdictions. As discussed earlier in this letter, the general obligation debt limitation calculations are made by the Treasurer’s Office at the time the District issues new general obligation bonds and are included in the bond prospectus. These debt service percent calculations are done in accordance with the methodology outlined in the Home Rule Act as described earlier in this letter. Information on the debt service percent is also included in the District’s multi-year plans and annual financial reports, but this information is not consistent with the Home Rule Act methodology. The estimates in the multi-year plans do not use the methodology required by the Home Rule Act, which is the same methodology that is used to determine whether the District may issue new general obligation bonds. Instead of calculating the debt service percent by using the highest fiscal year debt service divided by the estimated revenue for the fiscal year the bonds will be issued—as is done in the bond prospectus—the debt service percent information in the multi-year plan is calculated by dividing the current year debt service by the current year revenue. The result is that the debt service percent information contained in the multi-year plans is less than what the percent would be if the Home Rule Act methodology were used. For example, for fiscal year 1994, information in the bond offering documents indicated that the debt service percent was 11.42 percent, while the debt service percent included in the multi-year plan for fiscal year 1994 was 10.14 percent. Although the fiscal year 1994 revenue estimates for each calculation were slightly different, the primary reason for the difference was the amount of debt service. The bond offering documents used the highest debt service for any fiscal year ($409.1 million, which will occur in fiscal year 1998), and the multi-year plan used the fiscal year 1994 debt service of $362.2 million. Thus, not only do the current multi-year plans depart from the methodology the Home Rule Act specifies for them, but the information in them also understates the debt service percentage in relation to the District’s debt limit. District managers need accurate information as they outline the various options needed to deal with the financial crisis.
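The effect of the two methodologies can be made concrete with the fiscal year 1994 figures cited above. The sketch below is simplified: it applies both calculations to a single illustrative revenue base, although the actual revenue estimates used in each document differed slightly.

```python
# Contrast of the two debt service percent methodologies, using the fiscal
# year 1994 figures cited above (dollars in millions). A single illustrative
# revenue base is used for both calculations, although the actual revenue
# estimates in each document differed slightly.

projected_debt_service = {1994: 362.2, 1998: 409.1}  # only the two years cited
fy94_revenues = 3582.0  # illustrative; derived from the reported 11.42 percent

# Home Rule Act methodology (used in the bond offering documents): highest
# projected debt service for ANY fiscal year, divided by estimated revenues
# for the year the bonds are issued.
home_rule_pct = 100.0 * max(projected_debt_service.values()) / fy94_revenues

# Multi-year plan methodology: current year debt service divided by current
# year revenues.
plan_pct = 100.0 * projected_debt_service[1994] / fy94_revenues

print(round(home_rule_pct, 2))  # ~11.42 percent, the figure in the prospectus
print(round(plan_pct, 2))       # ~10.11 percent, close to the plan's reported 10.14
```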
Table 4 outlines the differences between the debt service percents contained in the multi-year plan and our calculations using the Home Rule Act methodology. Information on the debt service percent contained in the annual financial reports also is not consistent with information in the bond offering documents. The methodology for calculating the debt service percentage in the financial statements is not specified in law. Like the debt service percent information contained in the multi-year plan, the debt service percent included in the financial statements is calculated by dividing fiscal year debt service by fiscal year revenues rather than using the highest debt service for any fiscal year. The financial statements include the debt service information in an exhibit entitled “Computation of Legal Debt Limitation.” Even though it is portrayed as the legal calculation, the information is not consistent with the methodology stipulated in the Home Rule Act. Table 5 outlines the differences between the debt service percentages that are currently contained in the annual financial statements and the debt service percentages calculated using the Home Rule Act methodology. Specifically, our calculations use the highest projected annual debt service divided by the actual annual revenues. As we noted in our June 1994 report, the District is faced with both unresolved long-term financial issues and continual short-term financial crises. The District’s high level of general obligation debt is approaching the 14-percent debt service ceiling. Although information on the debt service percent in the bond offering documents is calculated using the methodology required in the Home Rule Act, information on the debt service percent included in the District’s financial statements and multi-year plan does not use that methodology. As a result, the debt service percent amounts shown in the financial statements and multi-year plan are less than they would be if the Home Rule Act methodology were used. As such, the current financial statement representation of the debt service percentage and the multi-year plan debt service information could mislead users of such information. Because the District’s long-term debt is approaching the legal limit, it is critical that District managers and other District stakeholders have accurate information as they make decisions about how to meet the District’s financing needs. We recommend that the Mayor of the District of Columbia direct that (1) the multi-year plans contain debt service percentage information calculated in the manner required by the Home Rule Act and (2) the financial statements contain information on the debt service percentage that is calculated using the Home Rule Act methodology, that is, by dividing the maximum estimated annual debt service by the actual revenues. We provided a draft of this report to officials of the District of Columbia for their comment. In general, District officials agreed with the information contained in the report and supported the recommendations. They emphasized that comparing the District to other jurisdictions is difficult because of a variety of unique factors, including the high percentage of District property that is not subject to property tax. We believe the report sufficiently outlines the unique factors that make the comparison of the District to other jurisdictions problematic. District officials provided two additional reasons for the relatively low bond rating: the District’s unfunded pension liability and its lack of state sovereignty.
They also noted that part of the reason the District is approaching the debt limit was the issuance of $331 million in 12-year bonds in 1991 to eliminate a deficit in the General Fund. They explained that these bonds were of shorter duration than the typical general obligation bonds used to finance capital projects and that this increased the amount of debt service. We agree that these bonds contributed to the debt service, but regardless of the term or the purpose of the bonds, the fact remains that the amount of District debt is nearing the legal limit. District officials also said they are considering reducing the amount of future borrowing for capital needs. Although reduced future borrowing would lower the debt service percentage, such reductions need to be weighed against the District’s major capital needs. District officials also made some technical comments, which have been incorporated in the report where appropriate. We are sending copies of this report to the Mayor of the District of Columbia; the Chairman of the City Council; the Chairmen and Ranking Minority Members of the Subcommittee on the District of Columbia, Senate Committee on Appropriations, and the Senate Committee on Governmental Affairs; interested congressional committees; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-8549 if you or your staffs have any questions concerning this report. Major contributors to this report are listed in appendix II.
Pursuant to a congressional request, GAO provided information on the District of Columbia's debt, focusing on: (1) the types of debt that are available to the District and the limitations on that debt; (2) the total District debt and its estimated future borrowings; and (3) how the District's debt compares with other jurisdictions' debt. GAO found that: (1) the Home Rule Act authorizes and limits the District's short- and long-term debt; (2) the District's debt limits have not been revised since enactment of the Home Rule Act; (3) the District is authorized to issue revenue bonds, which are not backed by the full faith and credit of the District, to finance capital projects, and to borrow from the U.S. Treasury to meet its general expenses; (4) as of September 30, 1994, the District's long-term debt (both general obligation and U.S. Treasury debt) totaled $3.65 billion; (5) total debt service for these long-term obligations is expected to be $409 million in fiscal year 1998; (6) by 2000, the District's projected debt service percentage is expected to be close to the statutory limitation of 14 percent; (7) as of September 30, 1994, the District's general obligation bond rating of Baa was below that of every state and all but two of the 20 largest cities; (8) the District's bond rating has not changed since it began issuing general obligation bonds in 1984; (9) the District's debt level is higher than that of other jurisdictions in terms of debt per capita and debt as a percentage of taxable real property; (10) the District's debt cannot be adequately compared with that of other jurisdictions because its service responsibilities include those of both city and state governments; and (11) debt service calculations in the District's multiyear plans and annual financial statements are not consistent with the methodology required by the Home Rule Act.
In November 1996, Maine voters approved a citizen’s initiative—the Maine Clean Election Act (Maine’s Act)—establishing a full public financing program to fund with public moneys the campaigns of participating candidates for the state legislature and governor. Similarly, in November 1998, Arizona voters passed the Citizens Clean Elections Act (Arizona’s Act), establishing a full public financing program for participating candidates for the state legislature and various statewide offices, such as governor or secretary of state. Maine’s Commission on Governmental Ethics and Election Practices and Arizona’s Citizens Clean Elections Commission administer the respective state’s public financing program, including certifying that candidates have met the qualifications for receiving public funds. Legislative candidates who wish to participate in the respective public financing programs must be certified as participating candidates. Certified candidates, among other things, must (1) forgo self-financing and all private contributions, except for a limited amount of “seed money” prior to certification, and (2) demonstrate citizen support by collecting a minimum number of $5 contributions from registered voters. After being certified by the state as having met the qualifying requirements, participating candidates receive initial distributions (predetermined amounts) of public funding and are also eligible for additional matching funds from public moneys based on spending by or for privately funded opponents. These matching funds, up to predetermined limits, are given to participating candidates when spending by an opposing nonparticipating candidate exceeds the initial distribution of funds provided to the participating candidate during the primary or general election. Table 1 shows the public funding available to each participating candidate in the 2008 election cycle in Maine and Arizona. The calculation to assess whether matching funds for participating candidates are triggered also includes reported independent expenditures, which, in general, are expenditures made on behalf of a nonparticipating candidate or another participating candidate in the race by individuals, corporations, political action committees, or other groups. Various revenue sources are used to support the public financing programs. In Maine, state appropriations were the largest funding source, contributing 82 percent of total revenue in 2008. In Arizona, a surcharge on civil and criminal fines and penalties was the largest funding source, accounting for 59 percent of total revenue in 2008. In addition, funding for the public financing programs comes from state income tax checkoff donations in both states. During the 2008 primary and general elections, participating legislative candidates in Maine received a total of almost $3 million, and participating legislative candidates in Arizona received a total of about $6 million. Before the passage of Maine’s Act in 1996 and Arizona’s Act in 1998, political campaigns in the two states were financed completely with private funds, and no limitations were placed on expenditures by candidates from their personal wealth. Under Maine’s and Arizona’s public financing laws, nonparticipating candidates are not limited in the amount they may spend from their personal financial resources on their own campaigns.
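The matching funds mechanism described above can be sketched in simplified form. The function and amounts below are hypothetical illustrations of the general trigger logic, not the states' actual rules, which contain additional conditions.

```python
# Simplified sketch of the matching funds trigger described above: when
# spending by or for an opposing privately funded candidate (including
# reported independent expenditures) exceeds a participating candidate's
# initial public distribution, the state releases matching funds up to a
# predetermined cap. Names and amounts are hypothetical; the actual rules
# in Maine and Arizona contain additional conditions.

def matching_funds(initial_distribution, opponent_spending,
                   independent_expenditures, matching_cap):
    opponent_total = opponent_spending + independent_expenditures
    excess = opponent_total - initial_distribution
    if excess <= 0:
        return 0.0                    # trigger not reached
    return min(excess, matching_cap)  # matched up to the predetermined cap

# A participating candidate with a $5,000 initial distribution whose opponent
# spends $7,500, with $1,000 in independent expenditures also counted against
# the trigger, would receive $3,500 if the cap ($10,000 here) permits.
print(matching_funds(5000.0, 7500.0, 1000.0, 10000.0))  # 3500.0
```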
While not faced with limits on the total amount of money that they can raise or spend, nonparticipating candidates are subject to certain limitations on the amount that an individual, corporation, or political committee can contribute to their campaigns, and they have additional reporting requirements. For example, in Maine, a nonparticipating candidate in the 2008 legislative elections could accept individual contributions of up to $250 per election, and in Arizona, a nonparticipating candidate could accept individual contributions of up to $488 per election. In both states, nonparticipating candidates must file certain reports with the state when their campaigns exceed certain statutory thresholds relating to, for example, expenditures or contributions. Appendix II provides information about the design and implementation of Maine’s and Arizona’s public financing programs. While there is widespread agreement among researchers and state officials in Maine and Arizona with the goals of the public financing programs, there is little consensus about how to assess progress toward these goals and the effects of these programs. For example, research on the effects of state public financing programs in general has been limited because the programs vary widely and were implemented at different times, hindering comparability. With regard to Maine’s and Arizona’s public financing programs, research tends to be limited to a single state or a limited number of years, or produced by groups that support or oppose public financing. Thus, in revisiting our 2003 report, we describe the five goals of the public financing programs and include a discussion of proponents’ and opponents’ views on the effects of these programs. One goal of the public financing programs in Maine and Arizona was to increase electoral competition, which refers to the level of competition for elected positions as demonstrated by whether races were contested (that is, involved more candidates than available positions) and by the percentage of the vote candidates received. For example, levels of electoral competition can vary from none at all in the case of an uncontested race, in which the sole candidate receives 100 percent of the vote (less any write-in votes), to an election in which several candidates vie competitively for a position, each winning a significant portion of the votes. Proponents of public financing for campaigns contended that public funding could increase electoral competition by allowing candidates, especially candidates challenging incumbents, to overcome the financial hurdles that would otherwise prevent them from entering a race. Further, proponents argued that public financing promotes competition by giving more candidates the opportunity to communicate effectively with the electorate once they have entered the race. Additionally, some proponents asserted that increasing the pool of challengers would also increase the diversity of the candidate pool and consequently make some races more competitive by offering candidates who appeal to a broader range of voters. On the other hand, opponents asserted that public financing does not necessarily attract candidates who have a broad base of constituency support and that, therefore, even though more new candidates may enter races and win, the quality of representation these candidates offer may be questionable.
A second goal of the public financing programs was to increase voter choice, as measured by changes in the number of candidates per race and changes in the breadth of political party affiliations, such as third-party and independent candidates, represented in races. Proponents of the public financing programs in Maine and Arizona contended that public funding of campaigns would encourage more individuals to run for office, thereby giving voters more choices on the ballot. Opponents asserted that an increase in the number of candidates on the ballot alone would not necessarily result in a more diverse selection of candidates, representation of a wider range of political views, or the guarantee that a broader array of issues would be debated in campaigns. The public financing programs in Maine and Arizona each were designed to have a two-pronged approach for the third goal—curbing increases in the costs of campaign spending. Each program imposed spending limits and certain other requirements on candidates who chose to participate in the public financing program and reduced the total amount of money that nonparticipating candidates were allowed to accept from each campaign contributor. Proponents of the public financing programs in Maine and Arizona contended that escalating campaign costs helped deter candidates from running for office. The intended outcome of this approach was to lower the cost of running for office by reducing and capping the amount of money available for campaign spending. Opponents argued that worthy candidates will garner public support and therefore do not need public financing to run their campaigns. Opponents also cited concerns that rising campaign costs are overstated and that most campaign fundraising comes from individuals who give less than the legal limit. A fourth goal of the public financing programs in Maine and Arizona was to enhance the confidence of citizens in government by reducing the influence of interest groups in the political process. The public financing programs in Maine and Arizona imposed campaign contribution limits on participating candidates and reduced the need for participating candidates to raise funds from private donors, such as interest groups, with the intent of eliminating any undue influence, or the perception of influence, that large campaign contributors may have on participating candidates. For instance, the “findings and declarations” section of Arizona’s 1998 Act stated, among other things, that the then current election-financing system “effectively suppresses the voices and influence of the vast majority of Arizona citizens in favor of a small number of wealthy special interests” and “undermines public confidence in the integrity of public officials.” From an overall perspective, proponents asserted that public financing programs should enhance the confidence of citizens in government by increasing the integrity of the political process and the accountability of officials. On the other hand, opponents asserted that, under the traditional campaign financing system, the voices of citizens are represented through competing interest groups. Opponents further asserted that there is no evidence that government-financed campaigns attract more worthy candidates than does the traditional system or that, once elected, publicly financed candidates vote any differently as legislators than do traditionally financed candidates.
Moreover, some opponents argued that interest groups can still assert influence on the political process through means other than contributing directly to candidates’ political campaigns, such as contributing to political parties, making independent expenditures on behalf of or for opposing candidates, and providing nonfinancial resources such as mobilizing members to volunteer for grassroots activities. Increasing voter participation, as indicated by increases in voter turnout, was the fifth goal of the public financing programs in Maine and Arizona. Proponents asserted that public financing increases voter participation by encouraging citizens to become involved in the political process and by increasing electoral competition. Proponents contended that the public financing programs increase communication between candidates and voters and encourage participating candidates or volunteers to go door-to-door to meet with voters and to collect $5 qualifying contributions. As a result, citizens would feel more involved in the political process and would be more likely to vote in legislative elections. Further, proponents argued that increased competition resulting from public financing would also increase voter turnout because more voters would be attracted by a more diverse set of candidates. Opponents stated that research on public financing programs and their effect on voter turnout is limited or anecdotal and that there is no evidence that citizens will become more engaged in the political process and be more likely to vote. Further, opponents cited the declining number of taxpayers who voluntarily provide contributions to the presidential and state public financing programs on their income tax forms as a reflection of the public’s waning participation and support. Since the 1970s, states and localities have offered a variety of programs providing public funds directly to candidates’ campaigns for statewide and legislative races. A July 2009 Congressional Research Service report identified 16 states offering direct public funding to candidates using two major types of public financing frameworks. According to this report, 10 of these states offered public financing programs that were primarily designed to match candidates’ private campaign contributions, thereby reducing the need for private fundraising. These programs varied widely, but generally the amount of public funds candidates received in this type of program depended on the amount the candidates raised, and the programs provided partial funding for candidates’ campaigns. Seven of these 16 states, including Maine and Arizona, offered full public financing programs for certain offices that provided fixed subsidies to candidates once they met basic qualifications. During the 2007 and 2008 election cycle, these 7 states offered full public financing programs for candidates running for the statewide and legislative offices shown in table 2. Appendix III describes the full public financing programs available in the 2007 and 2008 legislative elections in the two states other than Maine and Arizona that offered them—Connecticut and New Jersey. In nearly every session since 1956, Congress has considered legislation for public financing of congressional elections, although no law has been enacted. There are several bills pending in the current 111th Congress addressing public financing of congressional elections. Two of these are companion bills (H.R.
1826 and S. 752), respectively addressing elections to the House of Representatives and the Senate by proposing voluntary public funding systems with a mix of predetermined funding amounts, matching funds, and vouchers for the purchase of airtime on broadcast stations for political advertisements. Two other bills propose variations for funding House elections—H.R. 2056 proposes a voluntary public funding system for House elections, and H.R. 158 proposes a grant system that would exclusively and fully fund House campaigns during general elections. In July 2009, the House Administration Committee held hearings on H.R. 1826. These bills were referred to committees in 2009 and, as of April 2010, were pending. Many factors, such as the popularity and experience of the candidates, can influence the competitiveness and outcomes of elections and thus the interpretation of the effects of public financing programs. For example, term limits—limits on the number of terms elected officials such as legislators can serve—and redistricting—the redrawing of state electoral boundaries, such as those for legislative districts, in response to the decennial census—are factors that complicate the interpretation of available data. Other factors not directly related to public or private financing can also affect electoral campaigns and outcomes, such as economic conditions or particularly controversial ballot initiatives. In Maine and Arizona, legislative candidates’ participation in the public financing programs (measured by the percentage of candidates participating and the proportion of races with a participating candidate) increased from 2000 to 2008, although only limited data on candidates’ characteristics are available. Specifically, Maine candidates’ participation rates more than doubled in the primary and general elections from 2000 to 2004 and remained high (over 70 percent) through 2008; among incumbents, the majority participated from 2002 through 2008; and more Democrats than Republicans participated. In Maine, participating candidates were more likely to win their races. In Arizona, candidates’ participation rates more than doubled in the primary and general elections from 2000 to 2008, with higher percentages of challengers (rather than incumbents) and Democrats (rather than Republicans) participating. In Arizona, nonparticipating candidates were more likely to win their races than were participating candidates. Other than incumbency status and political party affiliation, the states did not maintain data that would allow us to assess candidates’ characteristics, such as their experience or demographic characteristics. The participation rate of legislative candidates (i.e., the percentage of legislative candidates participating in the public financing program) in Maine’s primary elections more than doubled over the first three election cycles after public financing became available. As shown in figure 1, the participation rate increased from 32 percent in 2000 to 72 percent in 2004 and remained over 70 percent from 2004 to 2008. Similarly, the participation rate of legislative candidates in Maine’s general elections more than doubled, from 33 percent in 2000 to 79 percent in 2004, and then remained over 80 percent for the 2006 and 2008 elections. When asked the main reasons for choosing to run their campaigns with or without public funds in the 2008 election, the 11 candidates we interviewed in Maine offered a range of reasons for participating or not participating in the public financing program.
Five of the 6 participating candidates cited difficulties associated with raising enough private funds to run a competitive campaign. Among the difficulties mentioned were the amount of time and energy required to fundraise, as well as the amount of funds needed to compete with a well-financed opponent. In addition, 4 of the 6 participating candidates said that participating in the public financing program allowed them to spend more time focusing on communicating with voters. For example, one candidate said that participating in the public financing program freed him up so he could focus on meeting with constituents and learning what issues were important to them, rather than having to spend his time asking for money. Further, 3 of the 6 participating candidates said that they wanted to be free of the influence of interest groups or other campaign contributors, and 2 of these candidates felt that it was strategically advantageous to participate in the public financing program. One of these candidates explained that he did not want to have to spend time raising funds while his opponent could use the time to campaign and still receive the same amount of money. We also asked candidates about specific factors they may have considered when choosing to run their campaign with public funds. Table 3 presents the number of participating candidates who said that they had considered each of the following factors when they decided to participate in the public financing program. The 5 nonparticipating candidates we interviewed in Maine most frequently mentioned opposition to using public funds for election campaigns as one of the main reasons they chose not to participate in the public financing program in 2008. For example, 4 of the 5 nonparticipating candidates said they were opposed to public financing of elections for a range of reasons, including concern over the state’s fiscal situation. One nonparticipating candidate said he chose not to participate because he did not want restrictions on how he ran his campaign. He explained that he had more flexibility with private funds and could donate excess campaign funds to nonprofit organizations after the election. In addition, one candidate told us that he was not opposed to the public financing program, but did not participate because he did not intend to run a campaign and anticipated that another candidate would take his place before the general election. We also asked the 5 nonparticipating candidates if they considered any of the factors listed in table 4 when they chose not to participate in the public financing program; their responses appear alongside each factor in the table. Incumbent candidates’ participation in the public financing program in general elections in Maine generally increased from 2000 to 2008, with the majority of incumbent candidates participating in the program from 2002 through 2008. As shown in figure 2, participating incumbent candidates, as a percentage of all candidates, increased from 10 percent in 2000 to 29 percent in 2008. Further, the percentage of participating incumbents grew from 27 percent of incumbent candidates in 2000 to 80 percent of incumbent candidates in 2008. Participating incumbents and challengers in Maine’s legislative races were generally slightly more likely to win than nonparticipating incumbents and challengers who ran in general elections held from 2000 through 2008, as shown in table 5. 
Since 2000, more Democrats than Republicans have participated in the public financing program in Maine primary and general elections, in terms of the proportion of candidates who participated. For example, while the rate at which Republican legislative candidates in the primary elections participated in the public financing program increased by about 41 percentage points from 2000 to 2008 (from 22 percent to 63 percent), their participation rate remains below that of Democrats, whose participation rate increased by about 48 percentage points in the primary elections during the same period (from 39 percent to 87 percent), as shown in figure 3. For both Democrats and Republicans, most of the growth in participation rates occurred between the 2000 and 2004 legislative elections, whereas participation rates have been relatively stable over the past three election cycles (2004, 2006, and 2008) in both the primary and general elections. For example, between the 2004 and 2008 election cycles, participation rates in the primary elections increased by about 4 percentage points among Democrats (from 83 percent to 87 percent) and by 1 percentage point among Republicans (from 62 percent to 63 percent). In all election years, more Democrats than Republicans participated in the public financing program, in terms of the proportion of candidates who participated. In Arizona, the participation rate of legislative candidates in primary elections more than doubled after the first election cycle in which public financing was available, from 24 percent in 2000 to 50 percent in 2002. The participation rate then steadily increased over the next three elections, to 59 percent in 2008, as shown in figure 4. Similarly, the participation rate of legislative candidates in Arizona’s general elections almost doubled after 2000, when it was 26 percent, to 49 percent in 2002, and then steadily increased over the next three elections, to 64 percent in 2008. When asked the main reasons for choosing to run their campaigns with or without public funds in the 2008 election, the 11 candidates we interviewed in Arizona offered a range of reasons for participating or not participating in the public financing program. Four of the 5 participating candidates we interviewed cited wanting more time to focus on interaction with voters. One of these candidates explained that collecting the $5 contributions strengthens candidates’ connections to voters at the grassroots level. Candidates cited other reasons for participation as well. The desire to be free of the influence of interest groups or other campaign contributors was among the reasons 3 of the 5 candidates gave for participating in the public financing program. One candidate explained that participating candidates are not reliant on interest groups and are beholden only to their constituents. Three candidates said that difficulty raising adequate private funds to run a competitive election campaign was one of the reasons they chose to participate. For example, one candidate said that as a first-time candidate, he did not know how to raise money, so without the public financing program he would not have been able to compete against the incumbent candidate. Two candidates said it was strategically advantageous to participate in the public financing program.
One of these candidates told us that he decided to participate in the public financing program because he would feel as if he were funding his opponents if he raised private funds and the participating candidates in his race received matching funds based upon his spending. We also asked candidates about specific factors they may have considered when choosing to run their campaigns with public funds. Table 6 presents the number of participating candidates who said that they had considered each of the following factors when they decided to participate in the public financing program. The 6 nonparticipating candidates we interviewed most frequently cited opposition to using public funds for election campaigns as one of the main reasons they chose to use private rather than public funds for their campaigns. Five of the 6 nonparticipating candidates said that they were opposed to using public funds for election campaigns for various reasons, including the belief that the public financing program forces taxpayers to fund candidates whom they may not support and the belief that the funds could be better spent on government services, such as healthcare for children, or used to reduce the state’s deficit. In addition, 2 candidates said they did not participate because they did not want restrictions on how they ran their campaigns, such as the limit on the amount of money candidates may raise. Another candidate told us that he is opposed to the public financing program because he does not believe that the Citizens Clean Elections Commission should have the authority to remove legislators from office for violating the rules of the public financing program. Additionally, 1 nonparticipating candidate said that she did not participate because her primary race was uncontested, so the public financing program would have provided meager resources, not enough for her to communicate with voters. We also asked the 6 nonparticipating candidates if they considered any of the factors listed in table 7 when they chose not to participate in the public financing program; their responses appear alongside each factor in the table. Incumbent candidates’ participation in the public financing program in general elections in Arizona increased from 2000 to 2008; however, the majority of incumbent candidates did not participate in the program over these five election cycles. Figure 5 shows that participating incumbent candidates, as a percentage of all candidates, generally increased from 4 percent in 2000 to 18 percent in the 2008 general elections. Nonparticipating legislative incumbents and challengers in Arizona were generally more likely to win than participating incumbents and challengers who ran in elections held from 2000 through 2008, as shown in table 8. In Arizona primary and general legislative elections, more Democrats than Republicans participated in the public financing program, in terms of the proportion of candidates who participated, although, as shown in figure 6, the participation gap between Democrats and Republicans has narrowed since 2000. For example, the percentage of Democrats who participated in the public financing program during the primary election increased by about 30 percentage points (from 42 percent to 72 percent) from 2000 to 2008, while the rate of participation among Republican candidates increased by about 41 percentage points (from 9 percent to 50 percent) over the same period.
The majority of general election races in both Maine and Arizona had at least one participating candidate in 2008, and the proportion of races with a participating candidate generally increased from 2000 through 2008 in both states. In Maine, the proportion of races with at least one participating candidate doubled over the five election cycles, from 47 percent in 2000 to 96 percent in 2008, as shown in figure 7. In Arizona, the proportion of races with at least one participating candidate increased steadily over the five election cycles, from 53 percent in 2000 to 82 percent in 2008. Data limitations preclude providing additional information about legislative candidates or the districts in which they ran for office. For example, Maine and Arizona state officials did not maintain data that would allow us to analyze candidates’ experience (e.g., whether they had previously held public office), qualifications (e.g., education or work experience), wealth, or demographics (e.g., sex, age, race, or ethnicity); the exceptions were whether a candidate was an incumbent in a given election and the candidate’s political party affiliation. Additionally, data were not available to address issues specific to individual legislative districts, such as partisan composition, local ballot initiatives and candidates, and economic or demographic factors that could affect a candidate’s participation in the public financing programs. We used a variety of statistical techniques to measure changes related to the five goals of public financing before and after the implementation of public financing and found some evidence of statistically significant changes in one measure of electoral competition. For the rest, we found either no overall changes, or data limitations precluded any analysis of changes. Specifically, there were differences in one of the measures used for the goal of increased electoral competition—the winners’ margin of victory decreased—but we could not attribute these differences directly to the public financing programs because needed data were limited or unavailable and because there are certain factors that we could not measure, such as candidate popularity, that affect electoral outcomes. There were no statistically significant differences observed for the other measures of electoral competition: contestedness (the number of candidates per race) and incumbent reelection rates. For three of the remaining four goals—increasing voter choice, curbing increases in campaign spending, and reducing the influence of interest groups and enhancing citizens’ confidence in government—the measurable differences were not statistically significant overall. While there is no indication that the programs have decreased interest group influence, some candidates and interest group officials we interviewed said that campaign tactics have changed. We could not measure differences for the fifth goal—increasing voter participation—because of data limitations, including differences in how voter turnout has been measured over time in Maine and Arizona. Overall, the margin of victory in legislative races decreased significantly in both Maine and Arizona compared to their respective comparison states after the public financing programs were implemented; however, we could not attribute these decreases directly to the public financing programs due to factors such as candidate popularity and changing economic conditions, which affect electoral outcomes.
On the other hand, contestedness and incumbent reelection rates did not change significantly over time in Maine and Arizona. The candidates and interest group representatives we interviewed from Maine and Arizona provided various perspectives on the effect of the public financing programs on the advantage of incumbent candidates and the number of close races. Overall, the winner’s margin of victory in races decreased significantly in both Maine and Arizona as compared to their respective comparison states after public financing was available; however, we could not attribute these decreases to the public financing programs due to factors such as the qualifications or experience of the candidates and presidential and other top-of-ballot races, which could motivate certain citizens to vote, thereby influencing electoral outcomes. We used three different measures of margin of victory in our analyses: (1) the average margin of victory for contested races, (2) the percentage of close races (i.e., races decided by less than 10 percentage points), and (3) the percentage of races that were landslides (i.e., races decided by more than 20 percentage points). As shown in tables 9 and 10, the average margin of victory for contested elections declined from 22 percentage points before public financing (1996 and 1998) to 19 percentage points after public financing (from 2000 through 2008) in Maine, and from 31 percentage points before public financing to 27 percentage points after public financing in Arizona. These changes, decreases of about 3 percentage points in Maine and 4 percentage points in Arizona, were statistically significantly different from the changes in the comparison states for both Maine and Arizona, where the average margin of victory increased by about 1 percentage point. The adjusted differences in the changes between Maine and Arizona and their respective comparison states are derived from statistical models that account for other factors that may have explained the changes, and in the case of average margin of victory the adjusted difference is statistically significant. Our fixed effects statistical models take into account whether elections were for the House of Representatives or the Senate and whether the races included incumbents. However, our results may be sensitive to our choice of comparison states. More information on these models and our choice of states is presented in appendix I and the e-supplement accompanying this report, GAO-10-391SP. We obtained similar results when we measured the margin of victory by contrasting the percentages of close races (defined as competitive races with a margin of victory of less than 10 percentage points) and races that were landslides (defined as competitive races with a margin of victory exceeding 20 percentage points). Close races increased in Maine and Arizona after public financing was available, by about 9 and 6 percentage points, respectively. The change observed in Arizona was significantly different from the changes in its comparison states, where the percentage of close races increased only slightly or actually decreased. Landslide races also decreased in Maine (by 7 percentage points) and Arizona (by 12 percentage points). These changes were significantly different from the changes in the comparison states after controlling for the other factors in our model.
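The three margin-of-victory measures, and the unadjusted version of the difference-in-differences contrast that underlies our adjusted estimates, can be expressed compactly. The race margins below are hypothetical, and the fixed-effects adjustments described above are omitted.

```python
# Sketch of the three margin-of-victory measures and an unadjusted
# difference-in-differences contrast. Race results here are hypothetical;
# the fixed-effects adjustments described above are omitted.

def margin_measures(margins):
    """margins: winner's margin of victory, in percentage points,
    for each contested race."""
    n = len(margins)
    return {
        "average_margin": sum(margins) / n,
        "pct_close": 100.0 * sum(1 for m in margins if m < 10) / n,
        "pct_landslide": 100.0 * sum(1 for m in margins if m > 20) / n,
    }

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in a treated state minus the change in its comparison states,
    for any one of the measures above."""
    return (treated_after - treated_before) - (control_after - control_before)

print(margin_measures([4, 8, 15, 22, 31]))  # hypothetical set of races

# Using the average margins reported in table 9: Maine fell from 22 to 19
# points while its comparison states rose about 1 point. The comparison-state
# levels below are placeholders; only the changes matter for the contrast.
print(diff_in_diff(22.0, 19.0, 22.0, 23.0))  # -4.0 percentage points
```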
Figures 8 and 9 present the year-to-year outcomes (instead of the averages for before and after public financing) for the three margin of victory measures for Maine and its comparison states and for Arizona and its comparison states. Changes in contestedness—the percentage of all races that had at least one more candidate running than the number of seats available—in Maine and Arizona before and after public financing was available were no different from the changes observed in the comparison states. As shown in tables 11 and 12, before public financing was available (1996 and 1998), 86 percent of the elections in Maine and 60 percent of the elections in Arizona were contested. The percentage of contested elections after public financing was available (from 2000 through 2008) increased in both states, to 91 percent in Maine and 75 percent in Arizona. However, even after controlling for other factors, these increases, of 5 percentage points and 15 percentage points, respectively, were not statistically different from the changes in the comparison states, where the percentages of contested elections increased by about 5 and 12 percentage points. Further, year-to-year changes in the percentages of contested elections in Maine and Arizona over time are not much different from those in their comparison states, before or after controlling for other factors, as shown in figure 10. Incumbent reelection rates (i.e., the percentage of incumbents who were reelected among those incumbents who ran in contested races) did not change significantly in Maine and Arizona before and after public financing was available. We first examined the proportion of contested races in which incumbents won relative to all contested races with an incumbent candidate. As shown in tables 13 and 14, in Maine the percentage of races in which incumbents who were challenged were reelected was 88 percent before public financing was available and about 90 percent after it was available. In Arizona, the percentage was 98 percent before public financing and 97 percent after. Incumbent reelection rates in the comparison states did not change over time—staying around 93 percent and 91 percent, respectively, in the two groups of comparison states. Further, our statistical model that tested the difference in change across time periods between the states with and without public campaign financing provided no evidence of any statistically significant difference. Year-to-year incumbent reelection rates for races in Maine and Arizona are basically unchanged over time and not much different from those in their comparison states, as shown in figure 11. We found similarly and consistently high reelection rates when we considered individual incumbent reelection rates, that is, the proportion of individual incumbents who won out of all incumbents who ran. In Maine, 90 percent of all incumbents running in general election races were reelected in the years before public financing was available, and 90.2 percent after. In Arizona, the individual incumbent reelection rate for general elections before public financing was available was 96.9 percent, compared to 96.1 percent after public financing was available. Research has shown that incumbent candidates may have an advantage over other candidates because of several factors, such as visibility in the media, name recognition, and the ability to perform services for constituents. Thus, the high incumbent reelection rates observed in these states despite the implementation of the public financing programs are not surprising.
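The two reelection rates used above differ only in their unit of analysis, races versus individual incumbents; the following sketch, with hypothetical results, makes the distinction explicit.

```python
# Sketch distinguishing the two incumbent reelection rates used above: the
# race-level rate (share of contested races with an incumbent in which an
# incumbent won) and the individual-level rate (share of all incumbents who
# ran who won). Data are hypothetical. In multimember races the two rates
# can diverge, since one race may include several incumbents.

# Each race: list of (candidate, is_incumbent, won) tuples.
races = [
    [("A", True, True), ("B", False, False)],                     # incumbent holds seat
    [("C", True, False), ("D", False, True)],                     # incumbent defeated
    [("E", True, True), ("F", True, False), ("G", False, True)],  # two-seat race
]

incumbent_races = [r for r in races if any(inc for _, inc, _ in r)]
race_rate = 100.0 * sum(
    1 for r in incumbent_races if any(inc and won for _, inc, won in r)
) / len(incumbent_races)

incumbents = [(inc, won) for r in races for _, inc, won in r if inc]
individual_rate = 100.0 * sum(1 for _, won in incumbents if won) / len(incumbents)

print(round(race_rate, 1))        # 66.7: an incumbent won in 2 of 3 such races
print(round(individual_rate, 1))  # 50.0: 2 of the 4 incumbents who ran won
```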
Many other factors that we could not control for in our analyses may affect electoral competition, including the popularity of candidates, extreme one-issue candidates, polarizing candidates, local ballot initiatives and issues, economic conditions, and other aspects of the political context. Further, the size and statistical significance of our comparative results also may be affected by our choice of comparison states. Thus, we cannot say definitively whether any of the changes we observed can be attributed to the public financing programs. The candidates and interest group representatives we interviewed from Maine and Arizona provided various perspectives on the effect of the public financing programs on the advantage of incumbent candidates and the number of close races. Most candidates we interviewed in Maine (8 of 11) believed that the advantage of incumbent candidates neither increased nor decreased as a result of the public financing program. Further, 2 of 11 candidates said that incumbents’ advantage had increased under the public financing program. Among the reasons candidates gave for incumbents’ advantage were their access to resources, such as campaign databases; political party support; and officeholder privileges, such as a budget to distribute communications (e.g., mailers and newsletters) to constituents. On the other hand, 1 of the 11 Maine candidates said that the advantage of incumbents had decreased as a result of the public financing program, since some incumbents have been defeated by participating candidates who might not have run for legislative office without public financing. Arizona candidates had mixed perceptions of the effect of the public financing program on incumbents’ advantage. Four of 11 candidates said that the advantage of incumbents neither increased nor decreased as a result of the public financing program, citing incumbents’ benefits such as name recognition, experience in running a successful election campaign, and access to funding. Three candidates said that incumbents’ advantage increased. One of these candidates explained that participating incumbent candidates did not have to do as much outreach to voters as they would have if they needed to raise private funds. However, 3 candidates we interviewed stated that the advantage of incumbent candidates has decreased. Among the reasons given for the decrease in incumbents’ advantage was that incumbents face more challengers under public financing. Another candidate agreed that incumbents had to work harder to defend their seats in the primary election; however, according to this candidate, incumbents’ advantage had not changed in general elections, since many legislative districts are either heavily Democratic or heavily Republican. The majority of candidates we interviewed in Maine (9 of 11) thought that the number of close legislative races increased as a result of the public financing program and provided a range of explanations why. For example, one candidate said that before the public financing program, some candidates would run unopposed because potential challengers lacked funds, but after public financing became available, more challengers entered races and ran competitively. However, other candidates had different perspectives that were not consistent with the statistical data we observed—one candidate said that the number of close races decreased, and one candidate said that the number of close races neither increased nor decreased as a result of the public financing program.
According to this candidate, the broader political climate influenced elections more than the public financing program did. In Arizona, over half of the candidates (6 of 11) believed that the public financing program had increased the number of close races. Candidates attributed the increase to greater equality in financial resources among candidates, more candidates running for office, and more extensive discussion of the issues, among other reasons. On the other hand, in contrast with the data we observed, 3 candidates we interviewed said that the number of close races neither increased nor decreased as a result of the public financing program. Additionally, 2 candidates said that the number of close races increased in the primary election, where, according to one candidate, there have been more challengers, but neither increased nor decreased in the general election, since many districts are heavily Republican or Democratic. Half of the interest group representatives we interviewed in Maine and Arizona (5 of 10) thought the closeness of races had not changed, although our data analysis did reveal changes. For example, an Arizona representative commented that the public financing program by itself had not changed the closeness of races and that redistricting and the ability of independents to vote in the primary had made the races closer. On the other hand, 2 of the 10 representatives believed that the closeness of races had changed. One representative from Maine stated that he believed there may be a few more close races because of the public financing program, while an Arizona representative believed the closeness of races had changed in the primaries because more candidates have an opportunity to run with public financing and therefore may be more competitive. Finally, 2 of the 10 interest group representatives were unsure whether public financing had changed the closeness of races, and 1 of the 10 did not respond. While increasing voter choice, as measured by changes in the number of candidates per race and changes in the breadth of political party affiliations represented in races, was a goal of the public financing programs, there were no observed changes in these measures in Maine and Arizona after the public financing programs became available. However, as discussed later, candidates we interviewed provided a range of perspectives about the role of third-party and independent candidates. The average number of legislative candidates per primary and general election race in Maine and Arizona did not vary greatly over the seven election cycles examined—before (the 1996 and 1998 elections) and after (the 2000 through 2008 elections) the public financing programs became available, as shown in table 15. During the 1996 through 2008 legislative elections in Maine and Arizona, candidates from a variety of third parties, as well as independents, ran for office. In Maine, these candidates included Green Party members and independents. In Arizona, these candidates included members of the Green, Natural Law, Reform, and Libertarian Parties, as well as independents. As shown in tables 16 and 17, while there were some changes in the percentage of races with third-party or independent candidates receiving 5 percent or more of the votes cast—a proxy indicator for “viable” candidates—there were no discernible trends from 1996 through 2008 in Maine and Arizona.
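The proxy indicator used in tables 16 and 17 is a simple threshold test. The sketch below illustrates it with hypothetical vote shares.

```python
# Sketch of the proxy measure used above: the percentage of races in which
# at least one third-party or independent candidate received 5 percent or
# more of the votes cast. Vote shares here are hypothetical.

MAJOR_PARTIES = {"Democratic", "Republican"}

def pct_races_with_viable_third_party(races, threshold=5.0):
    """races: list of races; each race maps party label -> percent of votes."""
    viable = sum(
        1 for race in races
        if any(party not in MAJOR_PARTIES and share >= threshold
               for party, share in race.items())
    )
    return 100.0 * viable / len(races)

sample_races = [
    {"Democratic": 52.0, "Republican": 44.0, "Green": 4.0},         # below threshold
    {"Democratic": 47.0, "Republican": 41.0, "Independent": 12.0},  # viable independent
]
print(pct_races_with_viable_third_party(sample_races))  # 50.0
```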
The 22 candidates from Maine and Arizona we interviewed had mixed views on the role of third parties and independents in the 2008 election and the quality and types of candidates running for election. The majority of candidates in Maine (7 of 11) and Arizona (7 of 11) said that the role of third parties and independents neither increased nor decreased as a result of the public financing programs. However, the other candidates had differing perspectives. For example, one candidate in Maine told us that public financing had increased the role of third-party and independent candidates, as it has been particularly helpful for third-party candidates running against incumbent candidates. Additionally, several candidates provided comments about the effect of the public financing programs on the quality and type of candidates running for legislative office. For example, in Maine, 3 of the 11 candidates told us that the public financing program had a positive effect on voter choice, by allowing a greater diversity of candidates to run for office and by improving the quality of political debate. On the other hand, 3 other Maine candidates thought the public financing program allowed candidates to run for office who were not credible or who were unqualified. In Arizona, 2 of the 11 candidates said that the public financing program allowed candidates who were on the extremes of the political spectrum to run and win, which has resulted in a more partisan and divided legislature. However, another candidate said that many of the participating candidates are experienced incumbent candidates. Average legislative candidate spending varied from year to year in Maine and Arizona in the five election cycles that occurred after public financing became available (2000 through 2008). In Maine, average candidate spending in House races decreased by a statistically significant amount after public financing became available, as compared to the two elections before public financing was available (1996 and 1998). However, we could not attribute this decrease to the public financing program because of other factors, such as reductions made to the amounts of funding publicly financed candidates received during the 2008 elections. Average candidate spending in Maine Senate races did not change significantly. In Arizona, data were not available to compare legislative candidate spending before 2000; however, in the five elections under the public financing program, average candidate spending has increased. Independent expenditures have increased fourfold in Maine, and state officials reported that independent expenditures have increased in Arizona since 2000. Although average legislative candidate spending varied from year to year in Maine, as shown in figure 12, average candidate spending in House races decreased in the five elections after public financing became available, while average Senate candidate spending did not change significantly compared to the two elections before public financing was available. Specifically, average candidate spending in Maine House races decreased from $6,700 before public financing was available to $5,700 after public financing became available. A state official told us that a 5 percent reduction in the set amount of public funding distributed to participating candidates for the general election likely contributed to the decrease in spending in the 2008 election.
As shown in figure 13, spending by Maine legislative incumbent candidates, challengers, and open race challengers (i.e., candidates running in open races with no incumbent candidates) varied from year to year. However, overall, the difference in average spending by incumbents and challengers narrowed in both House and Senate races after public financing became available. In addition, average spending by open race challengers was higher than average spending by either incumbents or challengers in House races, but was not significantly different in Senate races in the elections after public financing became available. In Maine House races, incumbents spent $1,800 more on average than their challengers in the two elections before public financing became available. In comparison, the difference in average spending by incumbents and challengers was not statistically significant in the five elections under the public financing program. Open race challengers spent more on average ($6,100) than either incumbents, who spent an average of $5,600, or challengers running against incumbents, who spent an average of $5,400, in the five elections under the public financing program. Before public financing became available, incumbents spent an average of $7,700, more than the average amount spent by challengers ($5,900) or open race challengers ($6,300) during the same period. The difference in average incumbent and challenger spending in Maine Senate races also decreased in the period after public financing became available. On average, incumbents spent nearly $10,500 more than their challengers in the two elections before public financing became available; however, after public financing became available, the difference between average incumbent and average challenger spending was not statistically significant. Similarly, spending by open race challengers in Senate races was not significantly different from spending by either incumbents or challengers in the elections after public financing became available. As figure 14 shows, average spending by participating and nonparticipating candidates varied in the five elections under the public financing program. However, overall, spending by participating candidates was not significantly different from spending by nonparticipating candidates in both Maine House and Senate races in the five elections under the public financing program. Independent expenditures in Maine legislative races have increased by about $500,000 in the five elections under the public financing program. As figure 15 shows, independent expenditures increased from about $150,000 in 2000 to a high of about $655,000 in 2006, with a large increase occurring in the 2004 election. The Director of Maine's commission told us that he believes that the increase in 2004 was due principally to a change in the definition of independent expenditures. While independent expenditures decreased somewhat (by about $20,000) in the 2008 election compared to the 2006 election, the total amount remained high. Average candidate spending in Arizona legislative races has generally increased in the five elections under the public financing program; however, we were not able to compare these spending levels to those in the period before public financing became available. As shown in figure 16, average candidate spending in Arizona House races has increased in each subsequent election since 2000, with the exception of 2006, when average spending declined about $1,000 from the previous election.
In 2008, average spending increased to $48,700, a $13,000 increase from 2006. In Arizona Senate races, average candidate spending decreased by about $10,000 in the 2002 election but has increased in each election since. State officials told us that the way candidates have spent campaign funds has changed since the implementation of the public financing program. For example, they said that candidates have coordinated their campaigns with other candidates in their district to maximize their campaign resources. For instance, two Republican candidates for the Arizona House of Representatives may pool their campaign funds to send out one mailing in support of both candidates, rather than each candidate sending out separate mailings. Average spending by challengers and incumbents fluctuated from year to year, with challengers spending more in some elections, and incumbents spending more in other elections in both Arizona House and Senate races, as shown in figure 17. Overall, there was no statistically significant difference between average incumbent and average challenger spending in either Arizona House or Senate races in the five elections under public financing. Further, spending by open race challengers in House races was not significantly different from spending by incumbents or challengers after public financing became available. However, in each of the five elections examined, average spending by open race challengers in Arizona Senate races was higher than average spending by incumbents or challengers, and overall, open race challengers spent between $14,600 and $16,200 more on average than either incumbents or challengers. Participating candidates spent more on average than nonparticipating candidates in Arizona House races, while in Senate races nonparticipating candidates spent more on average than participating candidates in some years and less in others, as shown in figure 18. Participating candidates in Arizona House races spent $44,500 on average, compared to nonparticipating candidates, who spent an average of $29,700 in the five elections under the public financing program. In Arizona Senate races, there was not a statistically significant difference between average spending by participating and nonparticipating candidates in the five elections examined. State officials said that the amount spent on independent expenditures has increased since 2000. Therefore, they stated that matching funds distributed to participating candidates in response to independent expenditures may account for some of the difference in average spending by participating and nonparticipating candidates. According to state officials, independent expenditures have increased in Arizona legislative elections under the public financing program. In 2008, independent expenditures in Arizona House and Senate races totaled $2,170,000. While complete data on independent expenditures specifically in legislative elections were not available for elections prior to 2008, state officials told us that independent expenditures have increased. Furthermore, as noted in our 2003 report, the Arizona Citizens Clean Elections Commission identified independent expenditures in the 1998, 2000, and 2002 legislative and statewide elections. Independent expenditures in both legislative and statewide races totaled $102,400 in 1998, $46,700 in 2000, and $3,074,300 in 2002.
We reported in 2003 that the increase in independent expenditures in the 2002 election was largely associated with the gubernatorial race, with more than 92 percent of the independent expenditures associated with two gubernatorial candidates. The candidates and interest groups we interviewed in Maine and Arizona had a range of experiences with and views on campaign spending, independent expenditures, and issue advocacy advertisements.

Candidates' and Interest Groups' Views on Campaign Spending

While candidates and interest groups had varying views about whether campaign spending had increased in the 2008 elections, in general they indicated that equality in financial resources among candidates had increased in the 2008 election as a result of the public financing programs. In Maine, about half of the candidates (5 of 11) we interviewed said that campaign spending increased in the 2008 election as a result of the public financing program. Candidates provided a number of reasons for the perceived increase in campaign spending. For example, one candidate said that campaign spending increased because some participating candidates spent more than they would have if they had raised private funds for their campaigns. Another candidate noted that the amount of money spent by participating candidates has increased in some races because they received additional matching funds for independent expenditures made by interest groups. Spending by nonparticipating candidates may have increased in some cases as well, according to one candidate, since the presence of a participating candidate in the race forces nonparticipating candidates to take the election more seriously and spend more on their campaigns than they would have otherwise. However, 3 other candidates we interviewed in Maine contended that campaign spending had decreased. For example, one candidate noted that spending had decreased because of the spending cap placed on participating candidates. Three candidates felt that spending in Maine legislative races had neither increased nor decreased as a result of the public financing program. In one candidate's view, contribution limits have had a greater influence on spending than the public financing program. In Arizona, the majority of candidates (7 of 11) we interviewed believed that candidate spending increased in the 2008 election as a result of the public financing program. One nonparticipating candidate told us that, because of the matching funds provision of the public financing program, in 2008 he spent almost double the amount he had spent in any previous campaign in order to get out his message and outspend his participating opponent. Another candidate commented that the increase in independent expenditures has driven up campaign spending by triggering additional matching funds for participating candidates. On the other hand, 3 candidates felt that the public financing program led to a decrease in campaign spending in the 2008 election. One participating candidate explained that she could have raised more money traditionally than she received from the public financing program. One candidate indicated that spending neither increased nor decreased. Regarding interest groups in Maine, two of the five representatives stated that candidate spending increased. One of these representatives commented that there has been an increase in money spent by candidates because there is more access to money and the races are more competitive.
Further, this representative stated that the public financing program gives challengers an opportunity to level the playing field when running against incumbents. Participating candidates who would otherwise not be able to raise enough private money can run a well-financed campaign using public funds and have an opportunity to present their issues for debate in the race. On the other hand, three of the five interest group representatives stated that candidate spending neither increased nor decreased in the 2008 election as a result of the public financing program. One of these representatives commented that the amount of money spent by candidates has not changed because limits are set by the legislature. However, this representative opined that the amount of money spent on behalf of the candidates in the form of independent expenditures had increased dramatically and consistently. He went on to say that the public financing program is reducing the disparity between the candidates who can raise the money and those candidates who cannot raise the money and that a candidate who is not serious can receive as much money as a serious candidate. In Arizona, four of the five interest group representatives believed that candidate spending increased as a result of the public financing program. For example, one of these representatives said that the public financing program has moved money from the candidates to independent expenditures, and that political parties are playing a significant role in this shift. Another interest group representative believed that candidate spending increased but was unsure if this increase was due to the public financing program, noting that increased campaign spending could be attributed to more competitive races or the rise in the cost of campaign materials due to inflation. Further, he noted that a pattern has emerged in which candidates run as participating candidates during their first election, and after being elected run subsequently as nonparticipating candidates. These legislators have name recognition and can raise the money required to run their campaigns and can also help other candidates get elected. On the other hand, one of the five representatives believed that campaign spending had neither increased nor decreased and that money has been redirected from the candidate campaigns to independent expenditures. He did not believe that his organization was spending any less money on campaigns. In general, candidates and interest group representatives in Maine and Arizona reported that equality in financial resources among candidates had increased in the 2008 election as a result of the public financing programs. In Maine, the majority of the candidates interviewed (7 of 11) said that equality in financial resources among candidates increased as a result of the public financing program. Two candidates commented that candidates from different political parties compete on a roughly equal playing field under the public financing program. Another Maine candidate said that both nonparticipating and participating candidates spend about the same amount on their campaigns. However, 2 candidates we interviewed said that equality in financial resources had decreased as a result of the public financing program. According to one candidate, more money may be spent by political action committees than by candidates in a race, which can reduce equality.
One nonparticipating candidate responded that the public financing program increased equality in financial resources among participating candidates, but decreased equality in financial resources among nonparticipating candidates, and 1 candidate was not sure how the public financing program had influenced equality in financial resources among candidates. In Arizona, about half of the candidates (6 of 11) thought equality in financial resources among candidates had increased. Two of these candidates commented that in their experience, candidates spent roughly the same, regardless of their political party affiliation or whether they participated in the public financing program or used traditional means to finance their campaigns. On the other hand, 1 candidate said that equality in financial resources had decreased, and commented that he was outspent by his participating opponents by a ratio of 13 to 1. Three of the 11 candidates we interviewed said that the equality in financial resources neither increased nor decreased as a result of the public financing program. For example, one candidate told us that incumbents continue to outspend their opponents and that nonparticipating candidates have developed strategies to maximize their financial advantage, such as raising funds at the end of the campaign so participating candidates have little time to spend matching funds. The remaining candidate was not sure about the change in resource equality. Seven of the 10 interest group representatives we interviewed in Maine (3 of 5) and Arizona (4 of 5) said that equality in financial resources among candidates had increased as a result of the public financing programs. For example, an Arizona interest group representative commented that the public financing law holds the candidates' financial resources even. On the other hand, 1 of the 5 representatives from Maine stated that equality in financial resources among candidates decreased and commented that since monetary limits are set statutorily, it is the independent expenditures that skew the financial resources among candidates. Finally, 2 of the 10 representatives, 1 from Maine and 1 from Arizona, believed that equality in financial resources neither increased nor decreased, while 1 of these representatives further commented that even though financial resources stayed the same, some nonparticipating candidates had a financial advantage because they asked for larger donations from interest groups.

Candidates' and Interest Group Representatives' Views on Independent Expenditures

Independent expenditures were of varying importance in the races of the candidates we spoke with. The majority of the Maine legislative candidates we interviewed (7 of the 11) reported that independent expenditures were of little or no importance to the outcome of their races in the 2008 election. One candidate explained that no independent expenditures were made on his behalf because he was perceived to be the likely winner. However, 2 candidates we interviewed said that independent expenditures were moderately important, and 2 candidates said that independent expenditures were extremely or very important to the outcome of their races in the 2008 election. The candidates who had independent expenditures made in their races shared their experiences with us. One candidate said an independent expenditure made on his behalf may have hurt his campaign since the expenditure was for a mailer that was poorly conceived and included a photograph of him that was of low quality.
Another candidate who participated in the public financing program in Maine said that she and her participating opponent received large amounts of matching funds in response to independent expenditures made by business, trucking, state police, and equal rights groups, which went toward mailings, television ads, and newspaper ads. However, the candidate thought that other factors played a greater role in the outcome of her election. In Arizona, independent expenditures reportedly played an important role in the outcome of 6 of the 11 candidates' races, with 5 candidates saying that independent expenditures were moderately important and 1 candidate reporting that independent expenditures were extremely important. One of these candidates said that groups made independent expenditures on behalf of his opponent to produce a number of mailers as well as billboards and television commercials that hurt his election campaign by shifting the focus away from the issues that he had concentrated on. Another candidate said that groups made independent expenditures opposing her near the end of her 2008 campaign; however, since she participated in the public financing program, she received matching funds and was able to respond. On the other hand, 5 candidates reported that independent expenditures were of little or no importance in the outcome of their races. One candidate said that while there was a lot of money spent on independent expenditures in his race, the independent expenditures did not play a big role in the outcome of the election since roughly the same amount was spent on behalf of both him and his opponent. Another candidate explained that since she was an incumbent and her reelection was secure, not much was spent on independent expenditures in her race. Eight of the 10 interest group representatives in Maine (5 of 5) and Arizona (3 of 5) we interviewed said their groups made independent expenditures in support of candidates in the 2008 elections, although the representatives had varying views about the influence the expenditures had on the outcome of the races. All 5 Maine interest group representatives made independent expenditures in the 2008 elections, and all expenditures included mailers in support of candidates. Three of these 5 Maine representatives were not sure how much influence the expenditures had on the outcome of the elections. On the other hand, the remaining 2 representatives had different views. One Maine representative believed that her group's expenditures were effective in getting the candidate's message out to the voters. Finally, another Maine representative, who made several independent expenditures, said his experience was mixed, and the candidates on whose behalf he made independent expenditures lost in more cases than they won. In Arizona, 3 of the 5 interest groups made independent expenditures. Two of these representatives said the expenditures were for mailers in support of candidates and believed that they were beneficial because the candidates won. The third representative said that his group made expenditures for both positive and negative mailers, and he believed that the expenditures were ineffective and was not sure what role they played in the outcome of the 2008 elections.
Candidates’ Views on Issue Advocacy Spending While Maine and Arizona legislative candidates we interviewed offered varying views on issue advocacy spending, 14 of the 22 candidates stated that issue advocacy advertisements were of little of no importance to the outcome of their races in the 2008 elections. Issue advocacy spending is often viewed as those forms of media advertisements that do not expressly advocate for or against a clearly identified political candidate. For example, such issue advocacy ads do not use terms like “vote for,” “vote against,” or “reelect.” In general, courts have not upheld campaign finance law regulation of issue advocacy spending upon the reasoning that the rationales offered to support such regulations did not justify the infringement upon constitutional free speech protections. According to state officials in Maine and Arizona, neither Maine nor Arizona track issue advocacy spending. In Maine, 7 of the 11 candidates we interviewed reported that issue advocacy advertisements were of little or of no importance to the outcome of their races. One of these candidates explained that his race was not targeted by issue advocacy ads because he was expected to win and his opponent was not perceived to be very competitive. Another candidate we interviewed had a negative issue advocacy ad made in his race, but he did not think it affected the outcome of the election. The candidate told us the issue advocacy ad listed the tax increases he voted for alongside a smiling picture of him; however, according to the candidate, the ad only told half of the story, since the bill that contained the tax increases was revenue neutral and raised some taxes while lowering others. In contrast, 4 of the 11 candidates we interviewed said that issue advocacy was moderately important in their 2008 races. For example, 1 of these candidates said that issue advocacy advertisements highlighting the candidates’ positions on education issues was a factor in the outcome of his race. Similarly, in Arizona, the majority of candidates interviewed (7 of 11) said that issue advocacy ads were of little or no importance to the outcome of their races in the 2008 election. For example, 1 candidate told us that he did not think that issue advocacy ads made a difference in his race because the ads did not mention the candidates’ names. Another candidate said that there were some issue advocacy ads that played an information role in his race by presenting a comparison of the candidates’ beliefs; however, the candidate thought the ads were of little importance in the outcome of the election. On the other hand, 3 candidates said that issue advocacy ads were moderately important to the outcome of their races. According to one candidate, issue advocacy advertisements on crime, abortion, and education funding influenced the outcome of his race. Our surveys of voting-age citizens and interviews with candidates and interest group representatives in Maine and Arizona indicated that the public campaign financing programs did not decrease the perception of interest group influence and did not increase public confidence in government. However, candidate and interest group representatives reported that campaign tactics, such as the role of political parties and the timing of expenditures, had changed. 
In 2009, the percentage of voting-age citizens in Maine and Arizona who said that the public financing law had greatly or somewhat increased the influence of special interest groups on legislators was not significantly different from the percentage who said that the law had greatly or somewhat decreased special interest group influence. For example, among those polled in Maine in 2009, the percentage of voting-age citizens who said that the influence of interest groups greatly or somewhat increased was 17 percent, while 19 percent said that the interest group influence had greatly or somewhat decreased, as shown in table 18. An additional 19 percent felt that the law had no effect on the influence of interest groups on legislators. In Arizona in 2009, 24 percent believed the public financing law greatly or somewhat increased the influence of interest groups, while 25 percent felt it greatly or somewhat decreased interest group influence. Additionally, 32 percent of those polled indicated that the public financing law had no effect on the influence of interest groups. Both Maine and Arizona candidates and interest group representatives had mixed views about changes in interest group influence as a result of the public financing programs in their states. In Maine, a little over half of the candidates (6 of 11) said that the likelihood that elected officials serve the interests of their constituents free of influence by specific individuals or interest groups neither increased nor decreased as a result of the public financing program. One of these candidates said the public financing program has not met the goal of decreasing the influence of interest groups, since interest groups will always find ways to influence legislators and the election process. However, 4 candidates we interviewed in Maine—all of whom participated in the public financing program—said that the likelihood that elected officials serve free of influence by individuals or groups had increased or greatly increased. One of these candidates explained that participating candidates are more empowered to serve as they see fit and are less willing to listen to political party leadership. On the other hand, a different candidate said that elected officials are less likely to serve free of influence by specific individuals or groups as a result of the public financing program. The candidate explained that under the public financing program, lobbyists and special interest groups have focused less on individual candidates, and more on winning favor with the Democratic and Republican party leadership. According to this candidate, interest groups are spending more, since the contribution limits do not apply to contributions to political parties. In turn, the candidate said that political parties are buying the loyalty of candidates by providing know-how, campaign staff, and polling data during the election. For Arizona, about half of the candidates interviewed (5 of 11) said that the public financing program did not affect the likelihood that elected officials serve the interests of their constituents free of influence by specific individuals or groups. One of these candidates said that the influence of special interest groups still exists, even if it does not come in the form of direct contributions. She explained that interest groups approach candidates with questionnaires and ask them to take pledges on different policy issues and also send their members voter guides and scorecards that rate candidates.
Two other Arizona candidates we interviewed commented that under the public financing program, interest groups have been contributing to campaigns in different ways, such as providing campaign volunteers and collecting $5 qualifying contributions for participating candidates. In contrast, 4 of the 11 candidates said that the likelihood that elected officials serve the interests of their constituents had decreased as a result of the public financing program. One of these candidates explained that the role of interest groups has increased, as they have become very skilled at producing advertisements with independent expenditures. On the other hand, 2 candidates we interviewed said that the public financing program increased the likelihood that elected officials serve the interests of their constituents free of influence by specific individuals or groups. One of these candidates said that in her experience as a participating candidate and state senator, interest groups are not "in her ear all of the time," and legislators are free to make decisions based on the interests of their constituents. The five Maine interest group representatives we interviewed had varying views about the likelihood that elected officials serve the interests of their constituents free of influence by specific individuals or interest groups and about changes in interest group influence as a result of the public financing program. Two representatives believed that the likelihood that elected officials serve the interests of their constituents free of influence had increased, and one representative stated that it had decreased. The two remaining representatives stated that it had neither increased nor decreased. One of these representatives commented that candidates are predisposed to certain issues based on their core beliefs and there is not any correlation between public financing and the likelihood that the elected officials will serve the interests of their constituents free of influence. With regard to changes in interest group influence, three Maine representatives stated that they have less of a relationship with candidates. One of these three representatives stated that interest groups are one step removed from the candidate because to make independent expenditures they cannot directly coordinate with the candidate. As a result, this representative further stated that interest groups have established stronger relationships with political parties. Another of these representatives believed that the public financing program has slightly decreased the role of interest groups because money tends to be funneled through the political parties. Also, there has been more emphasis on interest groups giving their endorsements of candidates rather than giving them money. Of the five Arizona interest group representatives we interviewed, four said that the likelihood that elected officials serve the interests of their constituents free of influence by specific individuals or interest groups neither increased nor decreased as a result of the public financing program. One of the five representatives stated that the likelihood that elected officials serve the interests of their constituents had decreased but did not elaborate. Regarding changes in interest group influence as a result of the public financing programs, two of the five representatives expressed opinions about whether such changes have occurred.
For example, one of these representatives stated that prior to the public financing program, interest groups made direct contributions to the candidates, but now they have to make independent expenditures or give money to the political parties. This representative stated that public financing has led to fringe candidates entering races and has caused a polarization in the legislature that has decreased the role of interest groups. Another representative stated that the interest groups do not directly support the candidates' campaigns and, instead, make independent expenditures. He also stated that there has been an increased emphasis on volunteer campaign activities in which interest groups use their members to help certain candidates. In 2002 and 2009, the percentage of voting-age citizens in Maine and Arizona who said that their confidence in state government had somewhat or greatly decreased was not significantly different from the percentage who said that their confidence had somewhat or greatly increased as a result of the public financing law. Additionally, the predominant response in both states was that respondents did not believe that the public financing program had any effect on their confidence in state government, as shown in table 19. For example, in Maine in 2009, the percentage of voting-age citizens who stated that the public financing law had no effect was 42 percent, while the percentage who felt that their confidence had somewhat or greatly increased was 20 percent, and the percentage who felt their confidence had somewhat or greatly decreased was 15 percent. In Arizona, the percentage of voting-age citizens who stated that the public financing law had no effect was 39 percent in 2009, while the percentage who felt that their confidence had somewhat or greatly increased was 26 percent, and the percentage who felt their confidence had somewhat or greatly decreased was 22 percent. In Maine and Arizona, over half of the candidates we interviewed reported that the public's confidence in government had not changed as a result of the public financing programs. Over half of the candidates in Maine (6 of 11) said the public's confidence in government neither increased nor decreased as a result of the public financing program. One of these candidates explained that he did not think many people were aware of the public financing program. Three of the 11 candidates said that the public's confidence in government decreased in 2008 because of the public financing program. According to one candidate, people have been disappointed with the quality of the candidates, and their confidence in government has decreased as a result. On the other hand, 2 of the 11 candidates said that the public financing program increased the public's confidence in government. In Arizona, 7 of the 11 candidates interviewed said that the public's confidence in government neither increased nor decreased as a result of the public financing program. The remaining 4 candidates were divided. Two candidates said that the public's confidence in government greatly increased or increased as a result of the public financing program. One of these candidates commented that the public financing program goes beyond providing public financing by providing a public forum and publications that play an important role in informing voters about the races and candidates. However, 2 candidates said that the public's confidence in government has decreased as a result of the public financing program.
One of these candidates said that the public financing program resulted in a more divisive government, which has slightly decreased the public's confidence in government. Candidates and interest group representatives in Maine and Arizona provided a range of perspectives on how campaign tactics have changed under the public financing programs. Their observations included changes in how money is spent, the role of political parties, and the timing of campaign activities. Candidates in both Maine and Arizona identified changes regarding how money is spent and the role of political parties since the implementation of the public financing programs. For example, in Maine, one candidate told us that private funding that would have gone directly to fund candidate campaigns has been redirected to political parties, which strategically focus their resources in certain races to help elect their candidates. Candidates reported that political parties have helped support candidates by providing advice, polling services, and campaign volunteers; distributing campaign literature; and making automated telephone calls to constituents on behalf of candidates. According to another Maine candidate, political parties are advising candidates to participate in the public financing program so that the political parties and political action committees can raise more money for their organizations. However, other candidates had different perspectives on the role of political parties under the public financing program. For example, one candidate told us that since participating candidates receive public financing, they are less dependent on political parties for money, less willing to listen to the party leadership, and more empowered to make their own decisions. Candidates in Arizona reported similar changes in how money is spent and the role of political parties. For example, one candidate commented that now more money is being funneled through the political parties rather than being directly provided to the candidates. Another candidate said that political parties have used the public financing program as a vehicle, explaining that when candidates use public funds for their campaigns, the money that would have normally gone to the candidate is now diverted to other candidates or causes. According to one candidate, since public financing became available, political parties are more active and have more extensive field operations to support candidates in a greater number of districts. Further, four candidates said that political parties are gaming the public financing system to maximize support for their candidates. For example, one candidate explained that if two Republican candidates or two Democratic candidates were running in the same multimember district, then partisan groups could make independent expenditures on behalf of one candidate that would trigger matching funds for the other participating candidate. However, two candidates said their party did not get involved in their races in the 2008 election, and one candidate said she did not observe any change in the role of political parties since the implementation of the public financing program. Furthermore, one candidate said that under the public financing program, candidates have more independence from political parties, noting that she relies on support from a broad constituency in her district, not just from her political party.
Interest group representatives in both Maine and Arizona identified changes regarding how money is spent and the role of political parties since the implementation of the public financing programs. For example, in Maine, one interest group representative stated that the group coordinates its expenditures through the party caucus committees and other interest groups. He said that substantial contributions from the caucuses are now made to candidate campaigns without the candidates' knowledge. These committees are also engaged in public polling on an ongoing basis to identify voting patterns and constituent concerns in order to identify candidates to support. Another representative stated that because she contributes to the political party, she has no way of knowing how her political action committee's money is being spent, since the party committee makes independent expenditures on behalf of the candidate. Further, she stated that for participating candidates, the only thing an interest group can do is give an endorsement. In turn, participating candidates use these endorsements in their campaign advertisements. On the other hand, one representative said that there has not been much difference in campaign tactics since public financing has been available, and another representative said that the same tactics, such as direct mailers and going door-to-door for monetary solicitations, have been used. Interest group representatives in Arizona similarly reported that public financing has changed campaign strategies. For example, one representative said that there is an increased reliance on volunteer activities, especially for statewide races. This representative stated that the amount of public funds for statewide candidates is not adequate, so candidates must rely on volunteers to get their message out. Volunteer activities, such as handing out flyers door-to-door or working phone banks to call voters, have become increasingly important. Another representative stated that since more candidates are participating in the public financing program and cannot accept direct contributions, there is more money available to nonparticipating candidates. He has noticed that nonparticipating candidates are asking for more money from interest groups than before public financing. According to another representative, campaign strategies are evolving. For example, a recent strategy has been for publicly and privately financed candidates to team up to maximize their resources, such as by coordinating mailers. Candidates and interest group representatives in Maine and Arizona also commented on how the public financing program has changed the timing of some campaign activities. In Maine, three candidates said that candidates or interest groups are changing the timing of spending in order to minimize either the amount or the effectiveness of matching funds distributed to participating opponents. For example, one participating candidate told us that supporters of his opponent distributed mailers right before the date when communications that support or oppose clearly identified candidates are presumed to be independent expenditures and trigger matching funds for participating opponents. Another strategy, according to one candidate, is for nonparticipating candidates or interest groups to spend money in the days immediately before the election, when participating candidates' ability to use the money is effectively restricted due to time constraints.
In response, one candidate told us that participating candidates have television, radio, or other advertisements ready in case they receive additional matching funds that need to be spent quickly. In Arizona, five candidates said that the tactics surrounding the timing of campaign spending have changed since the implementation of the public financing program. For example, one candidate said that the start of the campaign season is determined by the date on which spending by or on behalf of candidates triggers matching funds. In addition, one candidate explained that nonparticipating candidates have changed the timing of fundraising efforts, so that more funds are raised at the end of the campaign, when it is more difficult for participating candidates to spend matching funds effectively. As in Maine, one candidate in Arizona said that participating candidates have responded to this tactic by preparing advertisements ahead of time, just in case they receive additional matching funds. Interest group representatives in Maine and Arizona also commented on how the public financing program has changed the timing of some campaign activities. In Maine, one interest group representative stated that candidates were strategically timing their advertisements to gain a competitive advantage. For example, candidates consider from whom to obtain their seed money and when to qualify for public funds. In addition, incumbents usually have an advantage because they can send out newsletters to constituents close to the election without triggering matching funds. This representative stated that the biggest consideration regarding campaign strategies is how and when matching funds will be triggered by independent expenditures. She said that independent expenditures are made in the last 5 days before an election on the assumption that the opposing participating candidate cannot make effective use of the matching funds. In Arizona, one interest group representative said that generally nonparticipating candidates control the timing of their fundraising and spending, and participating candidates make plans to spend matching funds to counter last-minute attack advertisements. While increasing voter participation, as indicated by increases in voter turnout, was a goal of the public financing programs in Maine and Arizona, limitations in voter turnout data, differences in how voter turnout is measured across states and data sources, and challenges in isolating the effect of public financing programs on voter turnout hindered the analysis of changes over time. Voter turnout is typically calculated as the percentage of the voting-age population (VAP) or voting-eligible population (VEP) that cast a ballot in an election. The calculation of changes in voter turnout over time depends less on the specific data used for the numerator and denominator than it does on the consistency of how these data were collected over time and the use of comparable time frames and types of elections (e.g., presidential and congressional races). However, data reporting issues, changes in measurement, and other factors affect the calculation of voter turnout estimates. With respect to data limitations, data on voter turnout are not consistent across states or data sources.
Depending on the source, the numerator of the turnout calculation (i.e., who cast a ballot in an election) may include the total number of approved ballots cast, the number of ballots counted whether or not they were approved, self-reports of voting information, or the number of ballots cast for the highest office on a ticket, such as for president. For example, official voter turnout data compiled by the Election Assistance Commission (EAC) are based on surveys of states; however, states vary in their policies, for example, related to voter registration, as well as in which turnout statistics they report. Some states report voter turnout as the highest number of ballots counted, whereas other states report voter turnout as the number of votes for the highest office. Further, which specific statistic is reported is not necessarily constant over time. For example, EAC data prior to 2004 provide voter turnout based on the number of votes for the highest office on the ticket. Beginning in 2004, EAC reported total ballots cast, counted, or total voters participating for Maine and Arizona, but has not consistently reported the vote for highest office in these states. The Federal Election Commission (FEC) also provides information on turnout for federal elections, but the highest office in a given state and year could be a state office, such as governor, whose results would not be reported along with federal election results. Other voter turnout statistics, including those based on surveys of U.S. residents as part of the Census Bureau's Current Population Survey (CPS) or the American National Election Studies, rely on respondents' self-reports of voting behavior. However, self-reports of voting behavior are subject to overreporting because many respondents perceive that voting is a socially desirable behavior. Additionally, estimates of voting based on self-reports can fluctuate depending on the wording of the question used in a survey. Further, survey results are generalizable only to the population covered by the survey, or sampling, frame. The CPS sampling frame excludes individuals living in group quarters such as nursing homes, meaning that estimates of turnout based on CPS data would not include turnout among these individuals. Data on other forms of voter participation, such as volunteering for a campaign, contacting media, donating money, fundraising, and contacting representatives on issues of concern, are limited because they are rarely collected with the express purpose of making state-level estimates, and surveys with this information are not usually designed in a manner that allows comparison across individual states over many years. In addition, measurements of the denominator of voter turnout differ with respect to whether citizenship or other factors that affect eligibility are taken into account. Turnout estimates produced by the Census Bureau have historically used the VAP as a denominator, which includes those U.S. residents age 18 and older. In theory, a more accurate estimate of voter turnout can be made by adjusting VAP to account for the statutory ability of individuals to vote, in particular by removing noncitizens from the estimate. This is particularly important at the state level because the proportion of noncitizens varies across states and over time. However, the practical application of such adjustments may be complicated by the timing of available data relative to the date of the election or by other data limitations.
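To illustrate how these measurement choices interact, consider the following minimal sketch, which uses entirely hypothetical figures for a single state and election; it is included only to show why the choice of numerator and denominator matters when turnout rates are compared.

# Illustrative only: hypothetical figures showing how the choice of numerator
# (ballots counted vs. votes for highest office) and denominator (VAP vs. a
# citizenship-adjusted VAP) changes a state's estimated turnout rate.

def turnout_rate(ballots, population):
    """Turnout expressed as a percentage of a population base."""
    return 100.0 * ballots / population

ballots_counted = 610_000        # hypothetical: total ballots counted
votes_highest_office = 595_000   # hypothetical: votes for the highest office
vap = 1_050_000                  # hypothetical: voting-age population (VAP)
noncitizens = 60_000             # hypothetical: noncitizen residents age 18+

citizen_vap = vap - noncitizens  # one simple citizenship adjustment

print(f"ballots counted / VAP:         {turnout_rate(ballots_counted, vap):.1f}%")          # 58.1%
print(f"highest office / VAP:          {turnout_rate(votes_highest_office, vap):.1f}%")     # 56.7%
print(f"ballots counted / citizen VAP: {turnout_rate(ballots_counted, citizen_vap):.1f}%")  # 61.6%

Because each combination of definitions yields a different rate for the same election, comparisons over time are meaningful only if the same definitions are applied consistently across elections.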
For example, although the Census Bureau began to produce estimates of a citizen VAP for EAC in 2004, the estimate is calculated as of July 1 of the election year and does not adjust for population changes that may occur between July and the time of the election. Other alternatives for adjusting VAP for citizenship include calculating estimates between decennial Census surveys. The Census data currently provided to EAC include adjustments for citizenship based on another alternative, the American Community Survey (ACS). The ACS uses a different sampling frame than other surveys used to adjust for citizenship, such as the CPS, and has slightly different estimates of citizenship. In addition to adjustments for citizenship, researchers have also adjusted VAP to account for other factors that affect eligibility to vote, including state felony disenfranchisement laws and overseas voting, among others. To make these adjustments, researchers use alternative data sources such as information on the population in prison from the Department of Justice's Bureau of Justice Statistics; however, these adjustments cannot always be applied similarly because of differences across states over time (such as in the proportion of probationers who are felons). Lastly, changes in voter turnout cannot be attributed directly to public funding, as there are a number of factors that affect voter turnout. Voter turnout can be affected by demographic factors such as age, income, how recently a person registered to vote, and previous voting history. For example, studies have shown that much higher percentages of older Americans vote than do younger citizens. Voter turnout can also be influenced by a broad range of contextual factors, including the candidates and their messages, mobilization efforts, media interest, campaign spending, and negative advertising. These potential confounding factors, along with the aforementioned difficulties in calculating precise and consistent turnout information at the state level, prevented us from quantifying the extent to which, if any, Maine's and Arizona's public financing programs affected these states' voter turnout. Additional information about factors influencing the determination of changes in voter participation in Maine and Arizona can be found in the e-supplement accompanying this report—GAO-10-391SP.
While undertaking considerable efforts to obtain and assemble the underlying data used for this report and ruling out some factors by devising and conducting multiple analytic methods, direct causal linkages to resulting changes cannot be made, and many questions regarding the effect of public financing programs remain. Public financing programs have become an established part of the political landscape in Maine and Arizona and candidates have chosen to participate or not participate based on their particular opponents and personal circumstances and values. The public financing program is prevalent across these states, and in each election cycle new strategies have emerged to leverage aspects of the public financing program by candidates and their supporters to gain advantage over their opponents. The trend of rising independent expenditures as an alternative to contributing directly to candidates is clear and its effect is as yet undetermined. We requested comments on this draft from the Maine and Arizona Offices of the Secretary of State and the commissions overseeing the public financing programs in each state. We received technical comments from the Arizona Citizens Clean Elections Commission, which we incorporated as appropriate. We did not receive any comments from the other agencies. We are sending copies of this report to interested congressional committees and subcommittees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. illiam O. Jenkins, Jr. In accordance with the congressional direction specified in Senate Report 110-129 to revisit and update our 2003 report on the public financing programs in Maine and Arizona to account for data and experiences of the past two election cycles, this report provides data related to candidate program participation, including the number of legislative candidates who chose to use public funds to run for seats in the 2000 through 2008 elections in Maine and Arizona and the number of races in which at least one legislative candidate ran an election with public funds; and describes statistically measurable changes and perceptions of changes in the 2000 through 2008 state legislative elections in five goals of Maine’s and Arizona’s public financing programs—(1) increasing electoral competition by, among other means, reducing the number of uncontested races (i.e., races with only one candidate per seat in contention); (2) increasing voter choice by encouraging more candidates to run for office; (3) curbing increases in the cost of campaigns; (4) reducing the influence of interest groups and, thereby, enhance citizens’ confidence in government; and (5) increasing voter participation (e.g., voter turnout)—and the extent to which these changes could be attributed to the programs. To obtain background information and identify changes since our 2003 report, we conducted a literature search to identify relevant reports, studies, and articles regarding the public financing programs in Maine and Arizona, as well as campaign finance reform issues generally, which had been published since May 2003 when our report was issued. 
Based on our literature review, discussions with researchers who have published relevant work on public financing programs or state legislatures, and suggestions by state officials in Maine and Arizona, we interviewed 9 researchers and 17 representatives of advocacy groups and other organizations concerned with campaign finance reform or issues related to state legislative elections. See the bibliography for a listing of the reports and studies we reviewed. We reviewed the state statutes governing the public financing programs—Maine’s Clean Election Act and Arizona’s Citizens Clean Elections Act—from 2002 through 2009 and other documentation related to the public financing programs, such as candidate handbooks and annual reports, to determine any changes in the programs since our 2003 report. In addition, we interviewed state election officials in the commissions responsible for administering the two programs—Maine’s Commission on Governmental Ethics and Election Practices and Arizona’s Citizens Clean Elections Commission. We also interviewed officials in Maine’s and Arizona’s Offices of the Secretary of State, the agencies responsible for supervising and administering state elections activities, such as tabulating official election results. Through our discussions with Maine and Arizona state officials and our review of changes to the public financing statutes in both states from 2002 through 2009, we determined that the five goals of the public financing programs, as set out in our 2003 report, have not changed. We reviewed the Web sites of Maine’s Commission on Governmental Ethics and Election Practices (www.state.me.us/ethics) and Arizona’s Citizens Clean Elections Commission (www.azcleanelections.gov) to obtain information on the public financing programs, such as candidate handbooks and the forms necessary to run for office. Additionally, we reviewed information on state elections on the Web sites of Maine’s Secretary of State (http://www.maine.gov/sos) and Arizona’s Secretary of State (http://www.azsos.gov). Officials from these state agencies told us that their respective Web sites were current and reliable for our review. In addressing the objectives, we obtained and analyzed, to the extent possible, available statistical data from Maine’s and Arizona’s commissions and Secretaries of State offices on candidate program participation, election outcomes, and reported campaign spending from the 1996 through 2008 state legislative elections. We assessed the quality and reliability of electronic data provided to us by officials in Maine and Arizona by performing electronic testing for obvious errors in accuracy and completeness; validating the data using other sources; and reviewing the associated documentation, such as system flow charts. We also interviewed state officials about their data systems and any issues or inconsistencies we encountered with the processing of the data. When we found discrepancies, such as nonpopulated fields, we worked with state officials to correct them before conducting our analyses. Based on these tests and discussions, we determined that the data included in the report were sufficiently reliable for our purposes.
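The electronic testing described above can be illustrated with a short sketch. The checks below—for nonpopulated fields, implausible vote counts, and duplicate records—are representative of the kinds of tests described; the file and column names are hypothetical, not the states’ actual data layouts.

```python
# A minimal sketch of electronic testing for obvious errors in accuracy and
# completeness; field names are hypothetical.
import pandas as pd

races = pd.read_csv("election_results.csv")  # hypothetical data extract

# Completeness: flag nonpopulated (missing) values in key fields.
key_cols = ["year", "district", "chamber", "candidate", "party", "votes"]
missing = races[races[key_cols].isna().any(axis=1)]
print(f"{len(missing)} records with nonpopulated key fields")

# Accuracy: vote counts should be nonnegative whole numbers.
bad_votes = races[(races["votes"] < 0) | (races["votes"] % 1 != 0)]
print(f"{len(bad_votes)} records with implausible vote counts")

# Consistency: expect one record per candidate per race per election year.
dupes = races[races.duplicated(["year", "district", "chamber", "candidate"], keep=False)]
print(f"{len(dupes)} duplicated candidate records")
```

Discrepancies flagged by tests of this kind would then be resolved with the state officials who maintain the source systems, as described above.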
Although the public financing programs in Maine and Arizona cover both legislative and certain statewide offices, we limited the scope of our review to legislative candidates, since most of the elections for statewide offices occurred every 4 years and sufficient data would not have been available to conduct our analyses and draw conclusions. To obtain candidates’ and interest groups’ perspectives about the public financing programs, we conducted telephone interviews with a nonprobability sample of 22 out of 653 candidates who ran in 2008 state legislative primary and general elections in Maine (11 out of 452 candidates) and Arizona (11 out of 201 candidates). We conducted interviews with candidates from each state from June through September 2009. We selected these candidates to reflect a range of those who did and did not use public financing, won or lost in primary and general elections, had different political party affiliations, ran for election in different legislative chambers, and were incumbents and challengers. In our interviews, we asked similar, but not identical, questions to those from our 2003 report. Specifically, we included questions about the candidates’ views on factors influencing their decision to participate or not participate in the public financing program, the effects of the public financing program on electoral competition and campaign spending, and changes in the influence of interest groups on elections. We coded the candidates’ responses to the interview questions and conducted a content analysis to categorize responses and identify common themes. Further, we interviewed a nonprobability sample of 10 interest group representatives—5 in Maine and 5 in Arizona. In Maine, we selected these interest groups from a listing of approximately 80 registered interest groups provided by a Maine state official. In Arizona, we selected interest groups from a total of approximately 220 interest groups, which we identified through the Arizona Secretary of State campaign finance Web site as contributors to campaigns during the 2008 election cycle. We selected these interest groups based on several factors, including industry sectors, such as communications or construction, range of contributions made to political campaigns, and availability and willingness of the representatives to participate in our interviews. Results from these nonprobability samples cannot be used to make inferences about all candidates or interest groups in Maine and Arizona. However, these interviews provided us with an overview of the range of perspectives on the effects of the public financing programs. Results from the candidate interviews are included in report sections regarding candidate participation, voter choice, electoral competition, campaign spending, and interest group influence. Results from the interest group interviews are included in report sections regarding electoral competition, campaign spending, and interest group influence. Further details about the scope and methodology of our work regarding each of the objectives are presented in separate sections below. 
To provide data related to candidate program participation, including the number of legislative candidates who chose to use public funds to run for legislative seats (“participating candidates”) in the 2000 through 2008 elections in Maine and Arizona and the number of races in which at least one candidate ran an election with public funds, we obtained data from Maine’s and Arizona’s commissions and Offices of the Secretary of State. Specifically, for each state, we obtained or calculated data showing the number of legislative candidates who chose to use public funds to run for seats in the 2000 through 2008 elections, the seats (i.e., House or Senate) for which they were candidates, the political party affiliation of the candidates, whether the candidates were incumbents or challengers, whether the candidates were successful in their races, and the number of races in which at least one legislative candidate ran an election with public funds. As used in our report, “challengers” consist of all nonincumbent candidates. Thus, a candidate who was not an incumbent is called a challenger, even if that candidate did not face an opponent. Also, in counting races to calculate the proportion of races with at least one participating candidate, we included all races in which there was a candidate on the ballot, regardless of whether or not the candidate faced a challenger. Additionally, we eliminated from our analyses all candidates with zero votes and write-in candidates whose names did not appear on the ballot. In designing our approach to assess electoral competition, we first reviewed literature published since our 2003 report and interviewed researchers and representatives of organizations and advocacy groups concerned with public financing and campaign finance reform issues, including those who had conducted studies and research on electoral competition in states. Based on our review of the literature and these discussions, we concluded that there is no agreement on a standardized methodology to measure electoral competitiveness in state legislative elections. Thus, we used many of the same measures of electoral competition as those in our 2003 report, including winners’ victory margins, which refer to the difference between the percentage of the vote going to the winning candidate and that going to the first runner-up; the percentage of contested races, which refers to the percentage of all races with at least one more candidate running than the number of seats in contention; and incumbent reelection rates, which refer to the percentage of incumbents who were reelected. To assess changes in electoral competition in Maine and Arizona, we examined changes in these three measures of electoral competition in state legislative races by comparing the two elections before public financing became available to the five elections with public financing. However, unlike our 2003 report, we obtained and analyzed general election data from 1996 through 2008 from four comparison states that did not offer public financing programs for legislative candidates to determine if changes identified in Maine’s and Arizona’s general election outcomes for that same time period were similar to or different from changes observed in the four comparison states.
We selected the four comparison states—Colorado, Connecticut, Montana, and South Dakota—based on a number of factors, including geographic proximity to Maine or Arizona; the capacity of the state legislature; the presence of legislative term limits; the structure of the state legislature, such as legislative districts with more than one representative; and district size. In selecting our comparison states, we also reviewed other factors, such as demographic and economic characteristics—including age, race, poverty levels, and urban/rural population distribution—and recommendations from researchers and experts with knowledge of state legislatures whom we interviewed. Although all states were potential candidates for comparison to Maine and Arizona, we eliminated some states (such as those with odd-year election cycles or a unicameral legislature) from our review because of their dissimilarity to Maine and Arizona, and focused primarily on those states that were recommended to us by the researchers and experts we interviewed. We also considered whether a state had reliable electronic data that covered the 1996 through 2008 general elections and whether the state was able to provide the data to us within the time frame of our review. No state we considered perfectly matched Maine or Arizona across the full range of characteristics we reviewed. Table 20 summarizes some of the characteristics we used to select the four comparison states. We conducted analyses, to the extent possible, of the four comparison states’ election data for 1996 through 2008 for comparison with Maine and Arizona to determine whether any trends or patterns observed in states with public financing were also seen in the four comparison states that do not have public financing programs. For our analyses, we compared Maine with the election outcomes of Connecticut, Montana, and South Dakota. We compared Arizona with the election outcomes of Colorado, Montana, and South Dakota. Generally, when conducting these analyses, we separated House and Senate elections and grouped Maine’s and Arizona’s election outcomes before the public financing program became available (1996 and 1998 elections) and election outcomes after public financing (2000 through 2008) with election outcomes in the comparison states during the same time periods. We measured victory margins in three ways. First, we calculated the average margin of victory for contested elections, defined in single-member districts as the difference in the number of votes between the winner and the first runner-up, divided by the total vote count. This measure is generally equivalent to the calculation of margin of victory in our 2003 report. For multimember districts, we defined the margin of victory as the number of votes going to the second winner minus the number of votes going to the runner-up, excluding the number of votes going to the first winner from the denominator. Second, we compared whether changes in the margin of victory had an effect on the competitive nature of elections, as defined by the distribution of the vote outcome between the winner and the first runner-up. We compared close elections—defined as a difference of less than 10 percentage points in votes between the winning and losing candidates—with elections that were not as close—10 percentage points or more difference in votes between the winning and losing candidates. Third, we compared “landslide” elections, or races with decisive winners—defined as a difference of more than 20 percentage points in votes between the winning and losing candidates—with elections that were not landslides—defined as 20 percentage points or less difference in votes between the winning and losing candidates.
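These margin-of-victory definitions can be expressed compactly. The following is a minimal sketch of the calculations described above; the function and threshold names are ours, for illustration only:

```python
# Sketch of the margin-of-victory measures described above. 'votes' is a list of
# candidate vote totals for one race; 'seats' is the number of seats at stake.
def margin_of_victory(votes, seats=1):
    v = sorted(votes, reverse=True)
    if len(v) <= seats:
        return None  # uncontested race: no runner-up, margin undefined
    if seats == 1:
        # Single-member district: winner minus first runner-up, over total votes.
        return (v[0] - v[1]) / sum(v)
    # Multimember district: second winner minus runner-up, excluding the first
    # winner's votes from the denominator (stated above for the two-seat case).
    return (v[seats - 1] - v[seats]) / (sum(v) - v[0])

def classify(margin):
    pct = margin * 100
    if pct < 10:
        return "close"       # less than 10 percentage points
    if pct > 20:
        return "landslide"   # more than 20 percentage points
    return "neither"

print(classify(margin_of_victory([5200, 4800])))  # 4-point margin -> "close"
```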
We measured the number of contested races by contrasting elections in which the number of candidates exceeded the number of seats available in the race with elections in which the number of candidates was equal to the number of seats available. We measured incumbent reelection rates in two ways. First, for those general election races with incumbents that were contested, we calculated the percentage of races with incumbents who won compared to all races with incumbents. Second, we calculated the percentage of individual incumbents who won, relative to all incumbents who ran. To assess whether our calculations of incumbent reelection rates were sensitive to redistricting that forced incumbents from formerly separate districts to run against each other, we calculated the individual incumbency reelection rate excluding incumbents who participated in races with more incumbents than seats. Although we were not able to assess other potential effects of redistricting on incumbent reelection rates, such as those caused by demographic changes in a candidate’s constituency, we conducted a limited analysis of geographic changes in state legislative district boundaries. We used two types of multivariate statistical methods, fixed effects regression and hierarchical loglinear models, to evaluate how the competitiveness of races in Maine and Arizona changed after the implementation of public financing programs. Although multivariate methods do not allow us to directly attribute changes in outcomes to states’ public financing programs, they do allow us to assess whether changes in Maine and Arizona were unique relative to a set of comparison states, controlling for other factors, and whether the observed changes were different from what would have occurred by chance. Our statistical models and estimates are sensitive to our choice of comparison states for Maine and Arizona; thus, researchers testing different comparison states may find different results. We estimated fixed effects regression models to rule out broad groups of variables that may explain the patterns in our data without directly measuring them. Fixed effects models account for unmeasured factors that do not change over time (such as the structure of the state legislature), or that change in the same way (such as which party controls the U.S. Congress), for all states or legislative districts. This feature is particularly useful for our analysis because comprehensive and reliable data are not available on many of the factors that affect the competitiveness of state elections, such as long-term district partisanship, local economic conditions, and candidate quality. We estimated a variety of fixed effects models to gauge the sensitivity of the results to different assumptions and alternative explanations. These included the following:

Models that included fixed effects for districts and each combination of state and chamber of the legislature. These models estimate the district effects separately from the state effects.

Models that excluded multimember districts. These models confirm that our results are not sensitive to our choice of measure of margin of victory for multimember districts.
Models that used the logarithm of the margin of victory to normalize the data and limit the influence of outlying observations. These models reduce the potential influence of highly uncompetitive races.

Models that excluded races with no incumbent running for reelection. These models account for the possibility that term limits influenced whether a race was contested, because they exclude those seats that were open because of term limits.

Models that excluded races from Connecticut in 2008, when public funding became available. Full public financing was available for the first time to state legislative candidates in the 2008 elections in Connecticut.

Models that excluded races in which the number of incumbents exceeded the number of available seats. These models confirm that our results are not sensitive to our definition of an incumbent “win” when more incumbents than available seats participated in a race.

We included fixed effects for each year and, where appropriate, controlled for whether an incumbent was running for reelection. We estimated the models of both continuous and binary outcomes using linear probability models and robust variance estimators, because all of our covariates are binary (i.e., each variable stands for the presence or absence of something, such as incumbency). We also estimated loglinear models to evaluate the changes in these outcomes in House and Senate elections in Maine, Arizona, and the four comparison states. In our analyses, we fit hierarchical models to the observed frequencies in the different four-way or five-way tables formed by cross-classifying each of the four outcomes by state (Arizona vs. other states and Maine vs. other states), chamber (Senate vs. House), time period (before public financing programs were available, in elections prior to 2000, and after public financing programs were available, in 2000 and later elections), and whether an incumbent was or was not involved in the race. We followed procedures described by Goodman (1978) and fit hierarchical models that placed varying constraints on the odds and odds ratios that are used to describe the associations of state, chamber, and time period with each outcome. Ultimately, we chose from among these different models a “preferred” model that included factors that were significantly related to the variation in each outcome and excluded those factors that were not. We are issuing an electronic supplement concurrently with this report—GAO-10-391SP. In addition to summary data on election outcomes in Maine and Arizona, the e-supplement contains additional discussion of the following issues: summary tables of the election data obtained from the four comparison states; fixed effects model assumptions, sensitivity analysis, and results; loglinear model methods and results; margin of victory measures in multimember districts; incumbency reelection rates and the potential effect of district boundary changes following the 2000 Census; and voter turnout calculations and data.
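As a concrete illustration of the fixed effects specification described above, the following sketch estimates a linear probability model of whether a race was contested, with district and year fixed effects and robust standard errors. The variable and file names are hypothetical, and the sketch simplifies the full set of models described:

```python
# Sketch of a fixed effects linear probability model with robust variance
# estimation. 'contested' is a binary outcome; 'public_funding' indicates a
# Maine or Arizona race held after public financing became available;
# 'incumbent' flags whether an incumbent was running. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

races = pd.read_csv("race_level_panel.csv")  # hypothetical analysis file

model = smf.ols(
    "contested ~ public_funding + incumbent + C(district) + C(year)",
    data=races,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["public_funding"], model.bse["public_funding"])
```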
To determine whether public financing encouraged more state legislative candidates to run for office, we calculated the average number of candidates per legislative primary and general election race for seven election cycles, including two elections preceding the public financing program—1996 and 1998—and five elections after public financing became available—2000 through 2008. Also, to determine whether there were different types of candidates running for office, we compared the percentage of races with third-party or independent legislative candidates who received at least 5 percent of votes cast for each of these seven election cycles. We chose our threshold based on research and interviews with state officials that suggested 5 percent of votes is commonly required for parties to gain access to and retain ballot placement. Ballot placement is critical in that it enables voters to use party information to make voting decisions and allows them to see alternative party candidates at the same level as major party candidates without having to recall a specific candidate name. This definition of viability focuses on voter choice and is distinct from whether a candidate is “electable” or competitive with other candidates. To determine changes in candidate spending, we obtained available campaign spending and independent expenditure data from Maine and Arizona. Specifically, we obtained summarized campaign spending and independent expenditure data from Maine’s Commission on Governmental Ethics and Election Practices, the state agency responsible for campaign spending reports. We found that Maine’s campaign spending data for the 1996 through 2008 election cycles and independent expenditure data for the 2000 through 2008 election cycles were sufficiently reliable. In Arizona, we obtained campaign spending and independent expenditure data from the Secretary of State’s office. Due, in part, to several upgrades to Arizona’s campaign finance data systems over the time period reviewed, we found that Arizona’s campaign spending data for the 2000 through 2008 election cycles and independent expenditure data for the 2008 election cycle were sufficiently reliable, with limitations as noted. For example, up to the 2008 election, Arizona’s campaign spending database did not include precise data to identify and link each candidate to his or her campaign finance committee(s), the entities responsible for reporting candidates’ contributions and spending. Further, the candidates’ campaign finance committees can span several election cycles and include spending reports for candidates who ran in several races for the same or different offices, such as House or Senate. Thus, to the extent possible, we matched candidates and candidate campaign finance committees through electronic and manual means, identified and calculated relevant candidate spending transactions, and sorted the data by election cycle dates. Further, although Arizona’s Secretary of State office collected independent expenditure data from 2000 through 2008, it did not collect data on the intended beneficiaries of independent expenditures until the 2008 election cycle. Therefore, we limited our analysis of independent expenditures to the 2008 elections, since for earlier elections we could not identify which candidates benefited from the expenditures. We worked with state officials responsible for the public financing programs and campaign finance data systems in Maine and Arizona to develop our methodology for analyzing these data. These officials reviewed summaries we wrote about their respective databases and agreed that they were generally accurate and reliable. We calculated the average House and Senate legislative candidates’ spending in Maine for seven election cycles, from 1996 through 2008, and in Arizona for five election cycles, from 2000 through 2008. For comparisons across years and to observe any trends, we adjusted all candidate spending for inflation, with 2008 as the base year, using the Department of Commerce’s Bureau of Economic Analysis gross domestic product implicit price deflator.
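The deflator adjustment works by rescaling each year’s nominal spending into constant 2008 dollars. A minimal sketch, using placeholder deflator values rather than actual Bureau of Economic Analysis figures:

```python
# Convert nominal candidate spending to constant 2008 dollars using the GDP
# implicit price deflator. Deflator values below are placeholders, not actual
# BEA data; the 2008 value is indexed to 100.
DEFLATOR = {1996: 75.0, 2000: 81.9, 2004: 92.1, 2008: 100.0}  # hypothetical

def to_2008_dollars(nominal, year):
    return nominal * DEFLATOR[2008] / DEFLATOR[year]

# $5,000 spent in 2000 is roughly $6,105 in 2008 dollars under these values.
print(round(to_2008_dollars(5000, 2000), 2))
```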
To assess changes in interest group influence and citizens’ confidence in government, we included questions in our interviews with candidates in Maine’s and Arizona’s 2008 elections and in our interviews with interest groups in both states. Also, we contracted with professional pollsters who conducted omnibus telephone surveys with representative samples of voting-age citizens in Maine and Arizona. Generally, this polling effort was designed to determine the extent to which voting-age citizens in each state were aware of their state’s public financing program and to obtain their views about whether the program has decreased the influence of interest groups, made legislators more accountable to voters, and increased confidence in government. In order to compare responses, the survey consisted of largely similar questions to those asked for our 2003 report. The questions for Maine and Arizona were identical, except for some minor wording differences customized for the respective states, as shown in table 21. Follow-up questions (e.g., questions 2, 3, and 4 in each set) were not asked of any individual who, in response to question 1, acknowledged knowing “nothing at all” about the applicable state’s clean election law or was unsure or declined to answer. Since we pretested largely similar questions with members of the general public for our 2003 report, we did not pretest questions for this effort. To conduct the Maine poll, we contracted with Market Decisions (Portland, Maine), the same polling organization that conducted the Maine poll for our 2003 report. From October 19 through November 2, 2009, the firm completed 404 telephone interviews with randomly selected adults throughout Maine. The sample of the telephone numbers called was based on a complete, updated list of telephone prefixes used throughout the state. The sample was generated using software designed to ensure that every residential number had an equal probability of selection. When a working residential number was called, an adult age 18 or older in the household was randomly selected to complete the interview. The 404 completed interviews represent a survey response rate of 42.5 percent. To conduct the Arizona poll, we contracted with Behavior Research Center, Inc. (Phoenix, Arizona), the same polling organization that conducted the Arizona poll for our 2003 report. From September 9 through 18, 2009, the firm completed telephone interviews with 800 heads of households in Arizona. To ensure a random selection of households proportionately allocated throughout the sample universe, the firm used a computer-generated, random digit dial telephone sample, which selected households based on residential telephone prefixes and included all unlisted and newly listed households. Telephone interviewing was conducted during approximately equal cross sections of daytime, evening, and weekend hours—a procedure designed to ensure that all households were equally represented regardless of work schedules. Up to five separate attempts were made with households to obtain completed interviews. The 800 completed interviews represent a survey response rate of 42.98 percent. All surveys are subject to errors. Because random samples of each state’s population were interviewed in these omnibus surveys, the results are subject to sampling error, which is the difference between the results obtained from the samples and the results that would have been obtained by surveying the entire population under consideration. Measurements of sampling error are stated at a certain level of statistical confidence. The maximum sampling error for the Maine survey at the 95 percent level of statistical confidence is plus or minus 6.7 percent. The maximum sampling error for the Arizona survey at the 95 percent level of statistical confidence is plus or minus 5 percent.
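For reference, the textbook maximum sampling error for a simple random sample assumes the worst-case proportion (p = 0.5); a minimal sketch follows. The maximums reported above may exceed this formula’s result because reported errors can also reflect survey design, weighting, and question-level subsampling:

```python
# Worst-case (p = 0.5) sampling error at the 95 percent confidence level for a
# simple random sample; reported survey errors may differ due to design effects.
import math

def max_sampling_error(n, z=1.96):
    return z * math.sqrt(0.25 / n) * 100  # in percentage points

print(round(max_sampling_error(404), 1))  # Maine sample of 404 adults
print(round(max_sampling_error(800), 1))  # Arizona sample of 800 households
```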
To examine changes in voter participation, we reviewed information about voter turnout data from the Census Bureau, the Federal Election Commission, the United States Election Assistance Commission, the American National Election Studies, and other resources, including two repositories of elections data and information—George Mason University’s United States Elections Project (the Elections Project) and the Center for the Study of the American Electorate. We identified these sources through our review of the literature and through discussions with researchers. To determine the extent to which changes in voter participation could be assessed over time, we reviewed documentation and research on these potential data sources, including information on the collection and measurement of the voting-age population (VAP) or voting-eligible population (VEP) and the type of turnout recorded. Finally, we examined data and methodologies for measuring changes in voter turnout and other forms of participation to determine whether changes in participation could be precisely measured at the state level. We found that the different data sources required to calculate changes in turnout are not always comparable across sources and over time because of differences in the way that data are collected or changes in how turnout is defined. As such, there was no need to conduct electronic testing to further assess the reliability of the data for our purposes. This does not indicate that the data are unreliable for other purposes. We also discussed voter turnout calculations with state officials and researchers. Additional detail about our work related to voter participation is included in the e-supplement to this report—GAO-10-391SP. We conducted this performance audit from November 2008 through May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Maine voters, by a margin of 56 percent to 44 percent, passed the Maine Clean Election Act (Maine’s Act) in November 1996. Arizona voters, by a margin of 51 percent to 49 percent, passed the Citizens Clean Elections Act (Arizona’s Act) in November 1998. These ballot initiatives established optional financing programs for candidates desiring to use public funds to finance their campaigns as an alternative to traditional fundraising means, such as collecting contributions from individuals or political action committees.
The Maine and Arizona programs were the first state programs that offered public funding intended to fully fund most campaign costs of participating candidates seeking state legislature seats and certain statewide offices. Both states’ public financing programs became available to candidates beginning with the elections in 2000. Generally, participating candidates—those candidates who forgo private fund raising and who otherwise qualify to take part in the respective state’s public financing program—are to receive specified amounts of money for their primary and general election campaigns. Under Maine’s and Arizona’s laws, nonparticipating candidates—those candidates who choose to continue using traditional means for financing campaigns—are subject to certain limits on contributions and to reporting requirements. This appendix provides an overview of the public financing programs for legislative election campaigns in Maine and Arizona. Detailed information is available on the Web sites of the state agencies responsible for administering the respective programs—Maine’s Commission on Governmental Ethics and Election Practices (www.state.me.us/ethics) and Arizona’s Citizens Clean Elections Commission (www.azcleanelections.gov). Other than noting that the public financing program is an alternative financing option available to certain candidates, Maine’s Act has no section that specifically details the purposes, goals, or objectives of the law. To get the initiative on the ballot, a coalition of interest groups, the Maine Voters for Clean Elections, collected about 65,000 signatures. At that time, the coalition and other proponents advertised that the public financing program would “take big money out of politics” by limiting what politicians spend on campaigns, reducing contributions from special interests, and increasing enforcement of election laws. They said that the initiative, if passed, would decrease the influence of wealthy individuals, corporations, and political action committees in politics and would level the playing field so that challengers would have a chance against incumbents. They asserted that politicians would then spend more time focusing on the issues that affect all of their constituents rather than on pursuing money for their campaigns. Further, proponents also advertised that the public financing program would allow candidates who do not have access to wealth the opportunity to compete on a more equal financial footing with traditionally financed candidates, restore citizens’ faith and confidence in government, and give new candidates the opportunity to run competitively against incumbents. In 2003, we reported that, according to Maine state officials and interest group representatives we interviewed, there was no organized opposition to the initiative when it was on the ballot. A 2007 report by the Maine Commission on Governmental Ethics and Election Practices reaffirmed that the goals of the program are generally to increase the competitiveness of elections; allow participating candidates to spend more time communicating with voters; decrease the importance of fundraising in legislative and gubernatorial campaigns; reduce the actual and perceived influence of private money in elections; control the increase of campaign spending by candidates; and allow average citizens a greater opportunity to be involved in candidates’ campaigns.
In Maine, candidates who wish to participate in the state’s public financing option and receive funds for campaigning must first be certified as a Maine Clean Election Act candidate. Certified candidates, among other things, must (1) forgo self-financing and all private contributions, except for a limited amount of “seed money”—funds that may be raised and spent to help candidates with the qualifying process prior to certification—and (2) demonstrate citizen support by collecting a minimum number of $5 contributions from registered voters. For example, as table 22 shows, to qualify for public financing during the 2008 election cycle, a candidate in a Maine House race had to gather $5 qualifying contributions from at least 50 registered voters and could raise no more than $500 of seed money. After being certified by the state as having met qualifying requirements, participating candidates receive initial distributions (predetermined amounts) of public funding and are also eligible for additional matching funds based on spending by privately funded opponents in conjunction with independent expenditures against the candidate or on behalf of an opponent. For example, in Maine’s 2008 election, each participating candidate in a contested race for the House of Representatives (i.e., a race with more than one candidate per seat in contention) received an initial distribution of public funds of $1,504 for the primary election and $4,144 for the general election. Also, under Maine’s Act, the maximum allowable matching funds available to a participating candidate in a legislative race were capped at double the initial distribution that the candidate received for his or her contested race, as shown in table 23. Under Maine’s Act, prior to being amended in 2009, the commission was required to recalculate the initial distribution amounts at least every 4 years, based upon average expenditures in similar races in the two previous election cycles. Under this statute, the commission was not required to recalculate the initial distribution amounts in 2008 and intended to use the same payment amounts as in 2006. However, according to a state official, due to a shortfall in the state’s General Fund budget, the Maine State Legislature approved a 5 percent reduction in the general election distribution amounts, which took effect in the 2008 legislative elections. Beginning in the 2012 elections, in response to a 2009 amendment, the state will be required to recalculate the initial distribution amounts every 2 years, taking into account several factors such as average candidate spending and increases in costs of campaigning. Matching funds are triggered when required reports show that the sum of a privately funded opponent’s expenditures or obligations, contributions and loans, or fund revenues received exceeds a participating candidate’s sum of fund revenues. Further, the calculation used to assess whether matching funds are triggered is to include reported independent expenditures that benefit an opponent’s campaign.
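The trigger-and-cap logic just described can be summarized in a short sketch. This is an illustration of the general mechanics, not the statutory text; the dollar figures in the example are hypothetical apart from the 2008 House general election distribution of $4,144:

```python
# Simplified sketch of Maine's matching funds mechanics as described above:
# a match is triggered when a privately funded opponent's reported totals plus
# independent expenditures benefiting that opponent exceed the participating
# candidate's fund revenues; total matching funds are capped at double the
# initial distribution.
def matching_funds(opponent_total, indep_expenditures, candidate_revenues,
                   initial_distribution, matched_so_far):
    excess = (opponent_total + indep_expenditures) - candidate_revenues
    if excess <= 0:
        return 0.0  # no match triggered
    cap = 2 * initial_distribution  # maximum total matching funds
    return max(0.0, min(excess, cap - matched_so_far))

# Hypothetical 2008 House general election race (initial distribution $4,144):
print(matching_funds(7000, 1000, 4144, 4144, 0))  # pays $3,856 in matching funds
```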
Generally, independent expenditures are any expenditures made by individuals or groups, other than by contribution to a candidate or a candidate’s authorized political committee, for any communication (such as political ads or mailings) that expressly advocates the election or defeat of a clearly identified candidate. During the final weeks before an election, the definition of independent expenditure expands beyond express advocacy to include a broader range of communications directed to the public. In 2008, a total of about $3 million in public funds was authorized for the 332 participating candidates who ran in the Maine primary or general elections for state legislature. Various revenue sources are used to support Maine’s public financing program. As table 24 shows, appropriations were the largest funding source in Maine in 2008. Table 24 also indicates that in 2008, about 6 percent of Maine’s funding came from state income tax checkoff donations and other voluntary donations. This included $194,970 from state income tax checkoff donations, which were designated on about 7 percent of the 665,503 total returns filed in tax year 2007 in the state. Maine’s Act utilizes a commission, the Maine Commission on Governmental Ethics and Election Practices, to implement the public financing program and enforce provisions of the act. The commission consists of five members appointed by the Governor. Nominees for appointment to the commission are subject to review by the joint standing committee of the state legislature having jurisdiction over legal affairs and to confirmation by the state legislature. The commission is to employ a director and a staff to carry out the day-to-day operations of the program. In addition to financing election campaigns of candidates participating in the public financing program, the Maine Clean Election Fund is also to pay for administrative and enforcement costs of the commission related to Maine’s Act. In 2008, the commission’s total expenditures from the fund were $3,357,472, including $552,426 in personnel, technology, and other administrative costs. Before the passage of Maine’s Act, political campaigns were financed completely with private funds. There were no limitations placed on expenditures by candidates from their personal wealth. Under Maine’s Act, nonparticipating candidates are not limited in the amount they may spend from their personal financial resources on their own campaigns. While not faced with limits on the total amount of money that they can raise or spend, nonparticipating candidates are subject to certain limitations on the amount that an individual, corporation, or political committee can contribute to their campaigns. In the 2008 elections, for example, a nonparticipating candidate for the state legislature could accept up to $250 from a donor per election. Prior to the 2000 election, candidates could collect up to $1,000 from individuals and up to $5,000 from political committees and corporations. Additional reporting requirements are placed on nonparticipating candidates who have one or more participating opponents. For example, a nonparticipating candidate with a participating opponent must notify the commission when receipts, spending, or obligations exceed the initial allocation of public funds paid to the participating opponent. Further, the nonparticipating candidate must file up to three additional summary reports so that the commission can calculate whether the participating opponent is entitled to receive any matching funds. Under Maine law, individuals or organizations making independent expenditures in excess of $100 during any one candidate’s election must file reports with the state.
Reporting requirements for independent expenditures are important for helping to determine if matching funds are triggered. Independent expenditures are generally defined as any expenditure for a communication, such as campaign literature or an advertisement, that expressly advocates the election or defeat of a clearly identified candidate and that is made independently of the candidate, the candidate’s committee, and any agents of the candidate. However, under Maine’s campaign finance laws, expenditures by a group or individual to design, produce, or disseminate a communication to support or oppose a clearly identified candidate during the final weeks before an election with a participating candidate will be presumed to be independent expenditures, even if the communication does not expressly advocate a candidate’s election or defeat. This “presumption period” was first implemented in the 2004 Maine election. In 2008, the presumption period was 21 days before a primary election and 35 days before a general election. A 2007 change in the law increased the presumption period for general elections from 21 to 35 days. As table 25 shows, Maine has reporting requirements based upon the amount and timing of the independent expenditures to help ensure that participating candidates receive any additional matching funds they may be eligible for in a timely manner, particularly in the days before the election. Arizona’s Act contains a “findings and declarations” section that addresses the intent of the program. Specifically, the “findings” subsection of the Citizens Clean Elections Act, passed by voters in 1998, noted that the state’s then-current election financing system allows elected officials to accept large campaign contributions from private interests over which they have governmental jurisdiction; gives incumbents an unhealthy advantage over challengers; hinders communication to voters by many qualified candidates; effectively suppresses the voices and influence of the vast majority of Arizona citizens in favor of a small number of wealthy special interests; undermines public confidence in the integrity of public officials; costs average taxpayers millions of dollars in the form of subsidies and special privileges for campaign contributors; drives up the cost of running for state office, discouraging otherwise qualified candidates who lack personal wealth or access to special interest funding; and requires that elected officials spend too much of their time raising funds rather than representing the public. Further, the “declarations” subsection of Arizona’s 1998 Act stated that: “the people of Arizona declare our intent to create a clean elections system that will improve the integrity of Arizona state government by diminishing the influence of special interest money, will encourage citizen participation in the political process, and will promote freedom of speech under the U.S. and Arizona Constitutions. Campaigns will become more issue-oriented and less negative because there will be no need to challenge the sources of campaign money.” As in Maine, Arizona candidates who wish to participate in the state’s public financing option and receive funds for campaigning must first be certified as a Clean Elections candidate.
Certified candidates, among other things, must (1) forgo self-financing and all private contributions, except for a limited amount of “early contributions”—funds that may be raised and spent to help candidates with the qualifying process prior to certification—and (2) demonstrate citizen support by collecting a set number of $5 contributions from registered voters. To qualify for public financing during the 2008 election cycle, a candidate for Arizona’s House of Representatives had to gather at least 220 qualifying $5 contributions and could collect no more than $3,230 in early contributions, as shown in table 26. After being certified by the state as having met qualifying requirements, participating candidates receive initial distributions (predetermined amounts) of public funding and are also eligible for additional matching funds when an opposing nonparticipating candidate exceeds the participating candidate’s primary or general election spending limits. In Arizona’s 2008 elections, each participating candidate for the House of Representatives or Senate who was in a contested party primary election (i.e., a race with more than one candidate per seat in contention) received an initial distribution of public funds in the amount of $12,921. After the primary, successful major party candidates who were in a contested general election race then received an additional $19,382, as shown in table 27. Independent candidates received 70 percent of the sum of the original primary and general election spending limits. Unopposed candidates (i.e., those in an uncontested race) received only the total of their $5 qualifying contributions as the spending limit for that election. Participating candidates also received additional matching funds, up to predetermined limits, when an opposing nonparticipating candidate exceeded the primary or general election spending limits. Matching funds were also provided to participating candidates when independent expenditures were made against them or in favor of opposing candidates in the race. The calculation to assess whether matching funds for participating candidates are triggered is to include reported independent expenditures that, in general, are made on behalf of nonparticipating or other participating candidates in the race by individuals, corporations, political action committees, or other groups. A January 2010 federal district court ruling held Arizona’s Citizens Clean Elections Act to be unconstitutional. More specifically, the U.S. District Court for the District of Arizona held that Arizona’s matching funds provision burdens First Amendment speech protections, is not supported by a compelling state interest, is not narrowly tailored, is not the least restrictive alternative, and is not severable from the rest of the statute, thereby rendering the whole statute unconstitutional. On May 21, 2010, the U.S. Court of Appeals for the Ninth Circuit reversed the district court ruling on the basis that the matching funds provision imposes only a minimal burden on First Amendment rights and bears a substantial relationship to the state’s interest in reducing political corruption. In total, about $6 million in public funds was distributed in 2008 to the 120 participating candidates for the Arizona legislature. Arizona’s public financing program is supported through various revenue sources. As table 28 shows, a surcharge on civil and criminal fines and penalties was the largest funding source.
Table 28 also indicates that in 2008, $6.6 million, or about 39 percent of the fund’s revenue, came from state income tax checkoff donations and other voluntary donations. Arizona’s Act established the Citizens Clean Elections Commission to implement the public financing program and enforce provisions of the act. The commission consists of five members selected by the state’s highest-ranking officials of alternating political party affiliation. No more than two commissioners may be from the same political party or county, and commissioners may not have run for or held office, nor been appointed to or elected for any office, for the 5 years prior to being chosen as a commissioner. One new commissioner is to be appointed each year. As established by Arizona’s Act, the commission is to employ an Executive Director to facilitate administration of the program. The Executive Director is responsible for, among other things, educating and assisting candidates in complying with the act’s requirements, limits, and prohibitions; assisting candidates in participating and obtaining public funding; and determining additional staffing needs and hiring accordingly. Arizona’s Act caps commission spending for a calendar year at $5 times the number of Arizona resident personal income tax returns filed the previous calendar year. Of that amount, the commission may use up to 10 percent for administration and enforcement activities and must use 10 percent or more for voter education activities. The remainder of commission spending goes to participating candidates’ campaign funds. In calendar year 2008, the commission’s expenditures totaled $14,741,043—$850,447 for administration and enforcement, $6,179,857 for voter education, and $7,710,739 for campaign funds. With regard to the $7,710,739 spent for campaign funds in 2008, $1,715,395 was for statewide candidates and $5,995,344 was for legislative candidates. The commission’s responsibility for administering and enforcing Arizona’s Act covers related contribution limits, spending limits, and reporting requirements that affect both participating and nonparticipating candidates. Cases of possible violations may be initiated with the commission in one of two ways: (1) by an external complaint or (2) through information that comes to the commission’s attention internally. The commission may assess civil penalties after investigating compliance matters and finding probable cause of a violation, unless the candidate comes into compliance within a set time frame or a settlement agreement is reached. Under certain circumstances, the commission can remove a legislator from office for violating specified Arizona Clean Elections Act spending or contribution limits. For example, the commission, for the first time, acted to remove a state legislator from office for exceeding spending limits by over 10 percent—about $6,000—in his publicly funded election campaign during the 2004 primary election. The commission’s action was upheld by an Arizona administrative law judge, and an appeal by the legislator was unsuccessful in the Arizona court system. Before the passage of Arizona’s Act, political campaigns in Arizona were financed completely with private funds. There were no limitations placed on expenditures by candidates from their personal wealth. Under Arizona’s public financing laws, nonparticipating candidates are not limited in the amount they may spend from their personal financial resources on their own campaigns.
While not faced with limits on the total amount they can spend on their own campaigns, nonparticipating candidates are subject to certain limitations on the amounts of contributions they can accept. For example, in Arizona, contributions from individuals were limited to $488 per donor for nonparticipating candidates for the state legislature for the 2008 election cycle. The Arizona act reduced the limits on individual contributions to nonparticipating candidates by 20 percent. Nonparticipating candidates have additional reporting requirements. For example, a nonparticipating candidate opposed by a participating candidate must, in general, file a report with the Secretary of State if the campaign’s expenditures before the primary election exceed 70 percent of the original primary election spending limit imposed on a participating opponent or if the contributions to the nonparticipating candidate have exceeded 70 percent of the original general election spending limit. Under Arizona law, individuals or organizations making independent expenditures must file reports with the Secretary of State. According to commission officials, the commission coordinates with the Secretary of State to determine if participating candidates are eligible for matching funds based upon independent expenditures opposing participating candidates or supporting other candidates in their race. Under Arizona law, independent expenditures are generally defined as expenditures, such as campaign literature or advertisements, that expressly advocate the election or defeat of a clearly identified candidate and that are made independently of the candidate, the candidate’s committee, and any agents of the candidate. As table 29 shows, the amount and timing of the independent expenditure in relation to the election dictate when and how the independent expenditure must be reported. In addition, according to commission and state officials, Arizona has made changes intended to improve and clarify the process of reporting independent expenditures, given their importance in determining matching fund disbursements under the public financing program. For example, these officials told us that they made a number of updates to the office’s campaign finance system for the 2008 election to help improve the reporting and tracking of independent expenditures and the timely disbursement of matching funds to participating candidates. One update required individuals or committees making independent expenditures to report the unique identification number of the candidate who is the beneficiary of an independent expenditure. A Secretary of State official told us that prior to the 2008 election, the beneficiary of the independent expenditure was inconsistently identified in a text field, and there was no systematic way to distinguish independent expenditures made on behalf of specific candidates or ballot initiatives. After Maine voters passed the Maine Clean Election Act in November 1996 and Arizona voters passed the Citizens Clean Elections Act in November 1998, a similar public financing law (Connecticut’s Act) was introduced in the Connecticut state legislature in October 2005 and enacted in December 2005, establishing the Citizens’ Election Program.
Connecticut’s Act established an optional financing program for candidates for the state legislature beginning in 2008, and for certain additional statewide offices beginning in 2010, to use public funds to finance their campaigns as an alternative to traditional fundraising means, such as collecting contributions from individuals or political action committees. In addition, the New Jersey Fair and Clean Elections Pilot Project was enacted into law in August 2004, and the 2007 New Jersey Fair and Clean Elections Pilot Project Act was enacted into law in March 2007. These acts respectively established pilot projects offering optional public financing of campaigns for candidates seeking election to the General Assembly from certain legislative districts in the 2005 election and for candidates seeking election to the General Assembly and the Senate from certain legislative districts in the 2007 election. Detailed information is available on the Web sites of the state agencies responsible for administering the respective programs—Connecticut’s State Elections Enforcement Commission (SEEC) (www.ct.gov/seec/site/default.asp) and New Jersey’s Election Law Enforcement Commission (www.elec.state.nj.us). Connecticut’s Act established the Citizens’ Election Program, which offered full public financing for participating candidates for the House of Representatives and Senate of the state legislature, also known as the General Assembly, beginning in 2008, and will expand to certain statewide offices, such as governor and attorney general, beginning in 2010. SEEC outlined the following goals of the public financing program: to allow candidates to compete without reliance on special interest money, to curtail excessive spending and create a more level playing field among candidates, to give candidates without access to sources of wealth a meaningful opportunity to seek elective office in Connecticut, and to provide the public with meaningful and timely disclosure of campaign finances. In Connecticut, candidates for the state legislature who wish to receive public funds for campaigning must qualify by, among other things, (1) raising a certain total amount of money, in amounts between $5 and $100, in qualifying contributions from individuals and (2) collecting a certain number of these qualifying contributions from individuals who reside in the district for which the candidate seeks legislative office, as shown in table 30. In addition, candidates can contribute a limited amount of personal funds to their candidate committees before applying for the initial distribution of public funds. The maximum amount of personal funds allowed per candidate varies depending on the office sought. Any allowable amount of personal funds a candidate contributes is not counted as part of the qualifying contributions and reduces the initial distribution by a corresponding amount. After meeting the requisite qualifications and satisfying the ballot requirements administered by the Secretary of State, participating candidates from major political parties may receive initial distributions of public funding, as shown in table 31. Minor party candidates can receive varying amounts of public funding depending on whether they meet certain requirements. For elections held in 2010 and thereafter, SEEC is to adjust the amount of public funding for legislative candidates every 2 years to account for inflation.
Participating candidates are also eligible for supplemental public funding up to certain specified amounts, based on spending by nonparticipating or participating opposing candidates whose aggregate contributions, loans, or other funds received or spent exceed the applicable spending limit—the amount of qualifying contributions plus the applicable full initial distribution for a participating candidate in that race. In addition, on the basis of required independent expenditure reports or a SEEC determination, a participating candidate can also receive additional matching funds, in general, if such independent expenditures are made with the intent to promote the defeat of the participating candidate. Such additional funds are to be equal to the amount of the independent expenditure but may not exceed the amount of the applicable primary or general election grant for the participating candidate. If such independent expenditures are made by an opposing nonparticipating candidate’s campaign, additional matching funds are only to be provided if the nonparticipating candidate’s campaign expenditures plus the amount of independent expenditures exceed the applicable initial public funding amount for the participating candidate. The primary revenue source for Connecticut’s public financing program is the sale of abandoned or unclaimed property in the state’s custody, such as funds left in savings or checking accounts; stocks, bonds, or mutual fund shares; and life insurance policies. As of March 3, 2010, the Citizens’ Election Fund, the fund established by Connecticut’s Act for the public financing program, held about $43 million. In addition, the Citizens’ Election Fund receives funds from voluntary contributions and interest earned on the fund’s assets, and if the amount from the sale of abandoned or unclaimed property is less than the amounts specified under state law to be transferred to the Citizens’ Election Fund, the difference is to be made up from corporation business tax revenues. During the 2008 election cycle, about $8.3 million was distributed to 250 participating candidates in the general election, and about $3 million was expended for SEEC administrative costs. About three-fourths (250 of 343) of legislative candidates in Connecticut’s general election participated in the public financing program, and there was at least one participating candidate in over 80 percent of the races, as shown in table 32. Of the participating legislative candidates in Connecticut’s general election, more than half (130 of 250) were incumbents. About 95 percent of participating incumbents (123 of 130) were elected, while about 23 percent of participating challengers (28 of 120) were elected, as shown in table 33.
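The supplemental funding rules described above reduce to threshold-and-cap arithmetic. The Python sketch below illustrates the independent-expenditure match in those terms; it is a simplified rendering of the rule as summarized here, not the statutory formula, and the example figures are hypothetical.

    def ct_ie_match(ie_amount, applicable_grant):
        """Sketch of the match for an independent expenditure made to
        promote a participating candidate's defeat: additional funds
        equal the expenditure but may not exceed the applicable primary
        or general election grant."""
        return min(ie_amount, applicable_grant)

    # A $30,000 hostile independent expenditure against a candidate whose
    # applicable grant is $25,000 yields a capped match.
    print(ct_ie_match(30_000, 25_000))  # 25000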
In 2004, the New Jersey Fair and Clean Elections Pilot Project was enacted into law, establishing an optional public financing program for General Assembly candidates in two legislative districts in the 2005 general election. Under New Jersey’s 2005 public financing program for legislative candidates, the state Democratic party chairperson and the state Republican party chairperson each chose one district to participate in the program. In one of the selected districts, two of the four candidates for the state Assembly qualified for public funds; in the other district, no candidates qualified. In 2007, the state legislature expanded the program to cover Senate and General Assembly candidates in three legislative districts and made several changes to the program, such as decreasing the number of contributions each candidate was required to collect from registered voters in his or her district. The goals of the 2007 New Jersey Clean Elections Pilot Project are to end the undue influence of special interest money, to improve the public’s unfavorable opinion of the political process, and to “level the playing field” by allowing ordinary citizens to run for office. To participate in the 2007 Clean Elections Pilot Project, candidates needed to, among other things, (1) file a declaration of intent to seek certification with the New Jersey Election Law Enforcement Commission (ELEC), the agency responsible for the public financing program; (2) agree to participate in at least two debates; and (3) after certification, limit their expenditures to the amounts raised as “seed money” and qualifying contributions, plus public funds received from the fund. During the qualifying period, candidates may accept seed money contributions of $500 or less from individuals registered to vote in New Jersey, but aggregate seed money contributions may not exceed $10,000. A candidate seeking certification must obtain at least 400 contributions of $10 (i.e., $4,000) to receive the minimum amount of public funds available and at least 800 contributions of $10 (i.e., $8,000) to receive the maximum amount of public funds. The contributions must be from registered voters in the legislative district in which the candidate is seeking office. In addition, if two state Assembly candidates from the same party are running in the same legislative district, both must agree to participate in the public financing program to become certified and eligible to receive public funds. The amount of public funds received by a certified candidate depended on several criteria: (1) whether the candidate was opposed, (2) whether the candidate was a major party candidate, and (3) whether the candidate ran in a “split” district, one that, in general, was selected jointly by members of the majority and minority parties in the legislature. After being certified, a candidate nominated by a political party who had also received at least 400 qualifying contributions would receive a grant of $50,000 if opposed and $25,000 if unopposed. If the candidate were running in the “split,” or competitive, district, the candidate could collect funding in equal proportion to the number of remaining qualifying contributions (after the initial 400), up to a maximum of 800 qualifying contributions, for a total amount of public funds not to exceed the average amount of money spent by all candidates in the two preceding general elections for those offices. If a candidate were running in one of the two “nonsplit” districts (that is, one district selected by the members of the majority political party and one district selected by the members of the minority political party), the candidate could collect funding in equal proportion to the number of remaining qualifying contributions (after the initial 400), up to 800 qualifying contributions, for a total not to exceed $100,000. Qualifying contribution amounts received are deducted from grant amounts. For example, if a candidate raised 400 $10 qualifying contributions, the amount disbursed to the candidate would be $46,000 ($50,000 minus the $4,000 collected in qualifying contributions).
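The disbursement arithmetic for a nonsplit district can be sketched in a few lines of Python. This is a simplified illustration based on the description above: the linear scaling between 400 and 800 contributions is an assumption drawn from the “equal proportion” language, and only the opposed, party-nominated, nonsplit-district case is modeled.

    def nj_2007_disbursement(n_contributions):
        """Sketch of the 2007 pilot's grant arithmetic for an opposed,
        party-nominated candidate in a nonsplit district: a $50,000 base
        grant, scaling toward a $100,000 total at 800 qualifying
        contributions, with the $10 contributions deducted from the
        amount disbursed."""
        if n_contributions < 400:
            return 0  # below the qualifying threshold
        base = 50_000
        extra = (min(n_contributions, 800) - 400) / 400 * (100_000 - base)
        return base + extra - 10 * n_contributions

    # The worked example above: 400 contributions of $10 yield $46,000.
    print(nj_2007_disbursement(400))  # 46000.0

At the 800-contribution maximum, the sketch disburses $92,000, which together with the $8,000 in qualifying contributions reaches the $100,000 ceiling.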
Participating candidates may also receive additional funds under certain circumstances. When a campaign report of a nonparticipating candidate shows that the aggregate amount of contributions exceeds the amount of money provided to an opposing participating candidate, ELEC may authorize an additional amount of money equivalent to the excess amount, up to a maximum of $100,000, to each opposing participating candidate in the same district as the nonparticipating candidate. In addition, when a participating candidate files a written and certified complaint with ELEC, and ELEC determines that (1) a nonparticipating candidate is benefiting from money spent independently on behalf of the nonparticipating candidate or that (2) a participating candidate is the subject of unfavorable campaign publicity or advertisements by an entity not acting in concert with the opposing nonparticipating candidate, ELEC may authorize an additional amount of money, up to a maximum of $100,000, to the opposing participating candidate in the same legislative district who is not benefiting from the expenditure. For the 2007 pilot project, the New Jersey state legislature funded the program with approximately $7.7 million from the state’s general funds. In addition, voluntary donations, earnings received from the investment of money in the fund, and fines and penalties collected for violations of the public financing program are also sources of revenue. All unspent money is to be returned to the fund. About $4 million was distributed to participating candidates for the 2007 pilot project. According to a state official, New Jersey’s public financing program, which contains matching funds provisions, was not reauthorized for the 2009 elections, due both to concerns about a federal district court ruling holding the matching funds provisions of Arizona’s Citizens Clean Elections Act unconstitutional and to state budget constraints. In the 2007 pilot project, 16 of the 20 legislative candidates running for office in the three legislative districts participated in the program, and every winning candidate participated. Two of the 16 participating candidates received funds in addition to their initial distribution of public funds because of independent expenditures made on behalf of opposing nonparticipating candidates. In addition to the contact named above, Mary Catherine Hult, Assistant Director; Nancy Kawahara; Geoff Hamilton; Tom Jessor; Grant Mallie; Heather May; Amanda Miller; Jean Orland; Anna Maria Ortiz; Doug Sloane; Michelle Su; Jeff Tessin; Adam Vogt; and Monique Williams made significant contributions to this report.
The 2000 elections in Maine and Arizona were the first in the nation's history in which candidates seeking state legislative seats had the option to fully fund their campaigns with public moneys. In 2003, GAO reviewed the public financing programs in Maine and Arizona and found the programs' goals were to (1) increase electoral competition; (2) increase voter choice; (3) curb increases in campaign costs; (4) reduce interest group influence; and (5) increase voter participation. GAO reported that while the number of candidates who participated in the programs increased from 2000 to 2002, it was too soon to determine the extent to which these five goals of the programs were being met. Senate Report 110-129 directed GAO to update its 2003 report. This report (1) provides data on candidate participation and (2) describes changes related to the five goals of Maine's and Arizona's programs in the 2000 through 2008 elections and the extent to which changes could be attributed to the programs. To address its objectives, GAO analyzed available data about candidate participation, election outcomes, and campaign spending for the 1996 through 2008 legislative elections in both states, reviewed studies, and interviewed 22 candidates and 10 interest group officials selected to reflect a range of views. The interview results are not generalizable to all candidates or all interest groups. GAO is issuing an electronic supplement with this report (GAO-10-391SP), which provides data and summaries of statistical analyses conducted. In Maine and Arizona, legislative candidates' participation in the public financing programs, as measured by the percentage of candidates participating and the proportion of races with a participating candidate, increased from 2000 to 2008. Specifically, the participation rate of candidates in Maine's general elections increased from 33 percent in 2000 to over 80 percent in 2006 and 2008. Meanwhile, the participation rate of candidates in Arizona's general elections increased from 26 percent in 2000 to 64 percent in 2008. Also, the proportion of races with at least one candidate participating in the program generally increased from 2000 through 2008. While there was some evidence of statistically significant changes in one of the five goals of Maine's and Arizona's public financing programs, GAO could not directly attribute these changes to the programs, nor did it find significant changes in the remaining four goals after program implementation. Specifically, there were statistically significant decreases in one measure of electoral competition, the winner's margin of victory, in legislative races in both states. However, GAO could not directly attribute these decreases to the programs due to other factors, such as the popularity of candidates, that affect electoral outcomes. GAO found no change in two other measures of competition, and there were no observed changes in voter choice, measured as the average number of legislative candidates per district race. In Maine, decreases in average candidate spending in House races were statistically significant, but a state official said this was likely due to reductions in the amounts given to participating candidates in 2008; average spending in Maine Senate races did not change. In Arizona, average spending has increased in the five elections under the program.
There is no indication the programs decreased perceived interest group influence, although some candidates and interest group officials GAO interviewed said campaign tactics changed, such as the timing of campaign spending. Data limitations, including a lack of comparable measures over time, hinder analysis of changes in voter participation.
FAA is responsible for ensuring a safe, secure, and efficient airspace system that contributes to national security and the promotion of U.S. aerospace. To fulfill these key missions, FAA administers a wide range of aviation-related programs, such as those to certify the airworthiness of new commercial aircraft designs, to inspect airline operations, to maintain airport security, and to control commercial and general aviation flights. Integral to executing each of FAA’s programs are extensive information processing and communications technologies. For example, each of FAA’s 20 en route air traffic control facilities, which control aircraft at the higher altitudes between airports, depends on about 50 interrelated computer systems to safely guide and direct aircraft. Similarly, each of FAA’s almost 100 flight standards offices, responsible for inspecting and certifying various sectors of the aviation industry (e.g., commercial aircraft, repair stations, mechanics, pilot training schools, maintenance schools, pilots, and general aviation aircraft), is supported by over 30 mission-related safety database and analysis systems. Because of the complexity of the systems supporting FAA’s mission, most of these systems are unique to FAA and are not off-the-shelf systems that can be easily maintained by system vendors. FAA also has numerous, complex information processing interactions with various external organizations, including airlines, aircraft manufacturers, general aviation pilots, and other government agencies, such as the National Weather Service (NWS) and the Department of Defense. Over the years, these organizations and FAA have built vast networks of interrelated systems. For example, airlines’ flight planning systems are linked to FAA’s Enhanced Traffic Management System, which monitors flight plans nationwide, controls high-traffic situations, and alerts airlines and airports to bring in more staff when there is extra traffic. As another example, FAA facilities rely on weather information from NWS ground sensors, radars, and satellites to control and route aircraft. FAA is headed by an administrator, who is supported by the chief counsel, assistant administrators responsible for each of its five staff offices, and associate administrators responsible for each of its seven lines of business. The chief counsel and five staff offices handle crosscutting management functions (e.g., system safety), while the seven lines of business are dedicated to specific services (e.g., airport funding). For the purpose of Year 2000 program planning, FAA refers to all 13 of these organizations as “lines of business.” Figure 1 provides a visual summary of FAA’s management structure, highlighting the 13 lines of business. In assessing actions taken by FAA to address the Year 2000 problem, our objective was to determine the effectiveness of FAA’s Year 2000 program, including the reliability of FAA’s Year 2000 cost estimate. To satisfy this objective, we reviewed and analyzed key FAA documents, including (1) Year 2000 guidance and draft performance plan documents, (2) Year 2000 memorandums and cost estimation worksheets, (3) minutes of FAA’s Year 2000 steering committee, and (4) the lines of business’ Year 2000 project plans, monthly status reports, and systems assessment documentation. We also reviewed the agency’s Internet Web sites for Year 2000 information and newsletter articles, plus FAA’s documentation of various Year 2000 briefings.
We used our Year 2000 assessment guide to assess FAA’s readiness to achieve Year 2000 compliance. To supplement the analyses noted above, we interviewed FAA’s Year 2000 product team members, the Year 2000 program manager, FAA’s deputy chief information officer, and representatives from each of FAA’s 13 lines of business. We performed our work at the Federal Aviation Administration in Washington, D.C., the Mike Monroney Aeronautical Center in Oklahoma City, Oklahoma, and the William J. Hughes Technical Center in Atlantic City, New Jersey. Our work was performed from February through November 1997, in accordance with generally accepted government auditing standards. We requested comments on a draft of this product from the Secretary of Transportation or his designee. On January 6, 1998, we obtained oral comments from Transportation and FAA officials, including representatives from the Office of the Secretary of Transportation and FAA’s Chief Information Officer. Their comments are discussed in the “Agency Comments and Our Evaluation” section of this report. On January 1, 2000, computer systems worldwide could malfunction or produce inaccurate information simply because the date has changed. Unless corrected, such failures could have a costly, widespread impact. The problem is rooted in how dates are recorded and computed. For the past several decades, systems have typically used two digits to represent the year—such as “97” for 1997—to save electronic storage space and reduce operating costs. In such a format, however, 2000 is indistinguishable from 1900. Software and systems experts nationwide are concerned that this ambiguity could cause systems to malfunction in unforeseen ways, or to fail completely. Correcting this problem will not be easy or inexpensive, and it must be done while such systems continue to operate. Many of the government’s computer systems were developed 20 to 25 years ago, use a wide array of computer languages, and lack full documentation. Systems may contain up to several million lines of software code that must be examined for potential date-format problems. The enormous challenge involved in correcting these systems is not primarily technical, however; it is managerial. Agencies’ success or failure will largely be determined by the quality of their program management and executive leadership. Top agency officials must understand the importance and urgency of this undertaking and communicate this to all employees. The outcome of these efforts will also depend on the extent to which agencies have institutionalized key systems-development and program-management practices, and on their experience with such large-scale software development or conversion projects. Accordingly, agencies must first assess their information resources management capabilities and, where necessary, upgrade them. In so doing, they should consider soliciting the assistance of other organizations experienced in these endeavors. To assist agencies with these tasks, we have prepared a guide that discusses the scope of the challenge and offers a structured, step-by-step approach for reviewing and assessing an agency’s readiness to handle the Year 2000 problem.
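The two-digit ambiguity can be shown in a few lines of code. The Python sketch below is purely illustrative of the failure mode described above (legacy systems of the era were typically written in languages such as COBOL, not Python); the function name and values are hypothetical.

    def years_elapsed(start_yy, end_yy):
        """Elapsed years computed from two-digit year fields, as many
        legacy systems did to save storage."""
        return end_yy - start_yy

    # A record stamped in 1997 ("97") processed in 2000 ("00"):
    print(years_elapsed(97, 0))   # -97, not the correct 3
    # With four-digit years the arithmetic is unambiguous:
    print(2000 - 1997)            # 3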
The guide describes in detail the following five phases, each of which represents a major Year 2000 program activity or segment: Awareness. This phase entails defining the Year 2000 problem, gaining executive-level support and sponsorship, and ensuring that everyone in the organization is fully aware of the issue. Also, during this phase, a Year 2000 program team is established and an overall strategy is developed. Assessment. This phase entails assessing the Year 2000 impact on the enterprise, identifying core business areas, inventorying and analyzing the systems supporting the core business areas, and prioritizing their conversion or replacement. Also, during this phase, contingency planning is initiated and the necessary resources are identified and secured. Renovation. This phase deals with converting, replacing, or eliminating selected systems and applications. In so doing, it is important to consider the complex interdependencies among the systems and applications. Validation. This phase deals with testing, verifying, and validating all converted or replaced systems and applications and ensuring that they perform as expected. This entails testing the performance, functionality, and integration of converted or replaced systems, applications, databases, and interfaces in an operational environment. Implementation. This phase entails deploying and implementing Year 2000 compliant systems and components. Also, during this phase, data exchange contingency plans are implemented, if necessary. Institutional Year 2000 awareness is the first step in effectively addressing the Year 2000 problem. FAA has initiated awareness activities, including conducting a Year 2000 problem awareness campaign, drafting a Year 2000 agencywide plan, issuing a Year 2000 guidance document for project-level plan development, and establishing a program management organization. However, FAA fell behind in other key awareness activities, such as finalizing the agency’s Year 2000 strategy, in part because of its late designation of a program manager. FAA recognizes that the upcoming change of century poses significant challenges to the agency. It began Year 2000 problem awareness activities in May 1996, and during the ensuing 3 months established a Year 2000 product team and designated it as the focal point for Year 2000 issues within FAA. Also, FAA established a Year 2000 steering committee. Since then, the Year 2000 product team and steering committee have (1) sponsored an awareness day and held assessment and testing practices workshops, (2) set up Web pages and published a newsletter article to provide information on the Year 2000 problem, and (3) briefed FAA’s management on the agency’s Year 2000 problem. In addition, in September 1996, the product team issued the FAA Guidance Document for Year 2000 Date Conversion. This guide was intended to assist the lines of business within FAA in planning for the conversion of their computer systems to handle processing of dates in the year 2000 and beyond. It is essential that agencies appoint a Year 2000 program manager and establish an agency-level program office to manage and coordinate Year 2000 program activities. The problem and its solutions involve a wide range of dependencies among information systems, creating the need to (1) centrally develop or acquire conversion and validation standards and inspection, conversion, and testing tools; (2) coordinate the conversion of crosscutting information systems and their components; (3) establish priorities; and (4) reallocate resources as needed. However, FAA did not establish a program manager with responsibility for Year 2000 program management until July 1997. This contributed to delays in key awareness activities.
Specifically, FAA experienced delays in establishing the agencywide Year 2000 plan needed for timely initiation and effective execution of the key awareness and assessment phase activities. Because FAA was slow to designate a program manager, it is only now finalizing its agencywide plan for achieving Year 2000 compliance. The September 1997 draft of this document outlines the FAA strategy and management approach to address the Year 2000 century date change. Specifically, it defines the Year 2000 program management structure and responsibilities; adopts the five-phase management process, including the awareness, assessment, renovation, validation, and implementation phases that are being used throughout the government to manage and measure agencies’ Year 2000 programs; calls for awareness and assessment activities to be completed by the end of January 1998; provides for completion of the three remaining program phases—renovation, validation, and implementation; and establishes performance indicators and reporting requirements. FAA’s Year 2000 project manager provided a draft of this plan to the Administrator on December 1, 1997, but has no estimate of when this document will be signed by the Administrator and made final. Without an official agencywide Year 2000 strategy, FAA’s executive management is without a road map for achieving Year 2000 compliance. Further, the lack of an approved strategy means that FAA’s program manager lacks the authority to enforce Year 2000 policies. As a result, each line of business will have to decide if, when, and how to address its Year 2000 conversion, irrespective of agency priorities and standards. This reinforces our existing concerns about FAA’s CIO not being properly placed in the organization to develop, maintain, and enforce information technology initiatives. We have repeatedly recommended that FAA adopt a management structure similar to that of the department-level CIOs as prescribed in the Clinger-Cohen Act. The Department of Transportation (DOT) and FAA have disagreed with this recommendation because they believe that the current location of the CIO, within the research and acquisition line of business, is effective. We disagree. FAA’s CIO does not report directly to the Administrator and thus does not have organizational or budgetary authority over those who develop air traffic control systems or the units that maintain them. Further, the agency’s long history of problems in managing information technology projects reflects the need for change. FAA has not completed key assessment activities, placing it at enormous risk of not achieving Year 2000 compliance by January 1, 2000. Specifically, FAA has not (1) assessed the severity of its Year 2000 problem and (2) completed the inventory and assessment of its information systems and their components. Also, while FAA has initiated other key assessment phase activities on individual projects, it has not completed determining priorities for system conversion and replacement, developing plans for addressing identified date dependencies, developing validation and test plans for all converted or replaced systems, addressing interface and data exchange issues among internal and external systems, and initiating the development of business continuity plans in case systems are not corrected in time. FAA states that it expects to complete assessment phase activities by the end of January 1998.
Developing and publishing a high-level assessment of the severity of the Year 2000 issue provides executive management and staff with a broad overview of the potential impact the century change could have on the agency. Such an assessment provides management with valuable information on which to rank the agency’s Year 2000 activities, as well as a means of obtaining and publicizing management commitment and support for necessary Year 2000 initiatives. Unfortunately, FAA has only begun to assess the severity of the impact of Year 2000-induced failures. The Year 2000 Financial Oversight Team, established in October 1997, has been tasked with identifying the impact of Year 2000 failures on FAA’s operations, programs, and priorities. This assessment will be focused primarily on key mission-critical systems and is to be provided to FAA management in February 1998. On the basis of our discussions with FAA personnel, it is clear that FAA’s ability to ensure the safety of the National Airspace System and to avoid the grounding of planes could be compromised if systems are not changed. For example, the Host Computer System, the centerpiece information processing system in FAA’s en route centers, relies on the date to determine which day of the week it is when the system is initialized. This information triggers the use of different prescheduled flight plans. That is, carriers use different flight plans and times on a Monday than they do on a Saturday. Because January 1, 2000, is a Saturday, and January 1, 1900, was a Monday, uncorrected date dependencies could lead to the Host using incorrect flight plans. This could result in delayed flights. While FAA officials stated that they believe this problem has been solved, they acknowledged that other unforeseen problems may exist because they have not yet completed assessments of the impact of the Year 2000 problem.
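The day-of-week dependency just described is easy to demonstrate. The short Python sketch below illustrates the discrepancy a misinterpreted century introduces for January 1, 2000; it is an illustration of the failure mode, not FAA code.

    from datetime import date

    # Day-of-week lookup as a date-initialized system might perform it.
    print(date(2000, 1, 1).strftime("%A"))  # Saturday
    print(date(1900, 1, 1).strftime("%A"))  # Monday

    # A system storing only "00" and assuming the century is 1900 would
    # initialize with Monday (weekday) flight plans on what is actually
    # a Saturday.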
External organizations are also concerned about the impact of FAA’s Year 2000 status on their operations. FAA recently met with representatives from airlines, aircraft manufacturers, airports, fuel suppliers, telecommunications providers, and industry associations to discuss the Year 2000 issue. At this meeting, participants raised the concern that their own Year 2000 compliance was irrelevant if FAA was not compliant because of the many system interdependencies. Airline representatives further explained that flights could not even get off the ground on January 1, 2000, unless FAA is substantially Year 2000 compliant—and that would be an economic disaster. Because of these types of concerns, FAA has now agreed to meet regularly with industry representatives to coordinate the safety and technical implications of shared data and interfaces. An agencywide inventory and assessment of information systems and their components provides the necessary foundation for detailed Year 2000 program planning. A thorough inventory ensures that all systems are identified and linked to a specific business area or process and that all agencywide, crosscutting systems are considered, while detailed assessments determine (1) the criticality of the various systems and (2) how they should be handled (through conversion, replacement, retirement, or no remedial action). According to FAA’s April 1997 Year 2000 guidance document, it expected to have completed inventories of its computer systems and components by May 31, 1997. However, FAA still had not finished them when we completed audit work in November 1997. This inventory did not yet contain all systems, support software, firmware, telecommunications equipment, and desktop computers. According to FAA’s November 15, 1997, quarterly Year 2000 report to DOT, its inventory included 619 systems. These systems comprise approximately 18,000 subsystems and 65 million lines of software code. In commenting on a draft of this report, a Year 2000 program official told us that the inventory of systems was completed on December 29, 1997, with other inventory items expected to be completed later. In addition, FAA has not completed assessing (1) the criticality of the computer systems in its inventory or (2) how the systems should be handled. Of the 619 systems in its inventory when it reported to DOT on November 15, FAA identified 329 as mission-critical, 278 as nonmission-critical, and 12 as undetermined, meaning that they had not yet been categorized as mission-critical or nonmission-critical. These numbers will likely continue to grow as the inventory nears completion. In mid-November, FAA provided data to DOT on the number of mission-critical systems it had assessed and whether it had determined that they should be replaced, retired, left alone, or converted to achieve Year 2000 compliance. DOT requested validation of these assessments and refined these numbers for its November quarterly report to the Office of Management and Budget (OMB). In that report, DOT stated that FAA had completed assessments of only 84, or about 25 percent, of its 329 mission-critical systems. That is, FAA had not determined how to handle its remaining 245 mission-critical systems. Of the 84 completed assessments, DOT reported that 34 systems are Year 2000 compliant, 8 are to be replaced with new compliant applications, 2 are being retired, and 32 are being repaired. The remaining 8 systems are in the process of being certified compliant. Figure 2 summarizes the results of the completed assessments. FAA reported that assessment of the remaining mission-critical systems will continue through the end of January 1998.
Some of these plans include requirements for testing converted or replaced systems, addressing interface and data exchange issues, and developing contingency plans; other plans, however, do not address these items. Regardless, not all of these plans have been finalized. The program manager told us that she was working with the responsible organizations and planned to finalize their plans by the end of December 1997. At the conclusion of our review, these plans had still not been finalized. Renovation, validation, and implementation are the three critical final phases in correcting Year 2000 vulnerabilities. FAA has started the renovation process for some of the systems with completed assessments. However, because of the agency’s delays in completing its awareness and assessment activities, time is running out for FAA to renovate its systems, validate these conversions or replacements, and implement its converted or replaced systems. FAA’s delays are further magnified by the agency’s poor history of delivering promised system capabilities on time and within budget. FAA’s weaknesses in managing software acquisition will also hamper its renovation, validation, and implementation efforts. Given the many hurdles FAA faces and the limited time left, planning for business continuity becomes ever more urgent so that FAA’s mission-critical business processes and supporting systems continue to function through the millennium. Such business continuity planning defines the assumptions and risk scenarios, business service objectives, time frames, priorities, tasks, activities, procedures, resources, responsibilities, and the specific steps and detailed actions for reestablishing functional capability for mission-critical business processes in the event of prolonged disruption, failure, or disaster. Reliable program cost estimates require a thorough and complete definition of a program’s scope and components. However, FAA has yet to completely define the scope of its Year 2000 program. As noted above, FAA has not yet completed its inventory of systems or its assessment of which systems are critical and how to handle them. As a result, the current Year 2000 program cost estimate of $246 million will likely change once FAA completes its inventory and determines how to handle the various systems. FAA acknowledges the uncertainty of its current cost estimate, given incomplete inventory and assessment information. Even after assessments are completed and estimates finalized, FAA’s cost estimates could still be of questionable reliability. We have previously reported on FAA’s weaknesses in reliably estimating the cost of its software-intensive Air Traffic Control projects and recommended that FAA correct its weak cost-estimating practices by institutionalizing defined estimating processes. FAA agreed with our recommendation and has initiated efforts to improve its processes. Regardless of the eventual cost estimate, uncertainty surrounds the funding of FAA’s Year 2000 activities. For example, only $18 million of the $246 million is currently in the fiscal year 1998 budget. At the same time, however, OMB has stated that, because of DOT’s disappointing progress on the Year 2000 issue, it has established a rebuttable presumption that it will not fund any DOT requests for information technology investments in the fiscal year 1999 budget unless they are directly related to fixing the Year 2000 problem.
Further, according to a December 4, 1997, briefing to the Administrator, FAA has been directed by OMB to reprogram existing funds from lower-priority projects to its Year 2000 program. Should the pace at which FAA addresses its Year 2000 issues not quicken, and critical FAA systems not be Year 2000 compliant and therefore not ready for reliable operation on January 1, 2000, the agency’s capability in several essential areas—including the monitoring and controlling of air traffic—could be severely compromised. This could result in the temporary grounding of flights until safe aircraft control can be assured. Avoiding such emergency measures will require strong, active oversight. Yet an approved strategy containing detailed plans, milestones, and valid cost estimates—all vital considerations—is still lacking. This is due, at least in part, to incomplete assessment of agency vulnerabilities. Such incomplete assessment is cause for concern. It means that FAA has no way of knowing at this time how serious its Year 2000 date software-coding problem is—or what it will cost to address it. FAA’s delays to date are very troubling. Given the rapidly approaching millennium, such delays are no longer acceptable. Until all inventorying and assessments have been completed—set for the end of January 1998—FAA will not be able to effectively or efficiently marshal the available resources, both human and financial, that will be needed to do the job. Once the degree of vulnerability has been determined, a structured approach—such as that provided in our assessment guide—can offer a road map for the effective use of such resources. Unless critical renovation, validation, and implementation activities are completed in time, and sound contingency plans are available, FAA risks not successfully navigating the change to the new millennium. Urgent action is imperative to improve the management effectiveness of FAA’s Year 2000 program and thus the likelihood of its success. Accordingly, we recommend that the Secretary of Transportation direct the Administrator of FAA to take the actions necessary to expedite the completion of overdue awareness and assessment activities. At a minimum, the Administrator should finalize an agencywide plan that provides the Year 2000 program manager the authority to enforce Year 2000 policies and outlines FAA’s strategy for addressing the Year 2000 date change; assess how its major business lines and the aviation industry would be affected if the Year 2000 problem were not corrected in time, and use the results of this assessment both to help rank the agency’s Year 2000 activities and to obtain and publicize management commitment and support for necessary Year 2000 initiatives; by January 30, 1998, complete inventories of all information systems and their components, including data interfaces; by January 30, 1998, finish assessments of all systems in FAA’s inventory to determine each one’s criticality and to decide whether each system should be converted, replaced, or retired; determine priorities for system conversion and replacement based on these assessments; establish plans for addressing identified date dependencies; develop validation and test plans for all converted or replaced systems; craft Year 2000 contingency plans for all business lines to ensure continuity of critical operations; and make a reliable cost estimate based on a comprehensive inventory and completed assessments of the various systems’ criticality and handling needs.
DOT and FAA officials provided oral comments on a draft of this report. These officials generally concurred with the report’s findings, conclusions, and recommendations. FAA’s Chief Information Officer (CIO) stated that FAA recognizes the importance of addressing the Year 2000 issue and plans to implement our recommendations. The CIO stated that FAA’s Administrator had not yet signed the agencywide Year 2000 plan and that its Year 2000 program manager retired at the end of December 1997. FAA plans to hire a new acting program manager from outside the agency. Given the limited time left to address Year 2000 issues, delays in finalizing the agencywide plan and the turnover of senior management further jeopardize FAA’s chances of success. Representatives from the Air Traffic Services (ATS) line of business, the organization responsible for operational air traffic control systems, commented that their organization has made significant progress in addressing the Year 2000 problem and does not have the problems that FAA has overall. ATS officials stated that they have in place a Year 2000 project plan, a repair process and standards, and a quality assurance plan for system renovations. While we acknowledge these steps by ATS, the pace of progress must increase if ATS and FAA are to address the Year 2000 problem in time. FAA officials also offered some specific comments directed to particular language in the draft report. These comments have been incorporated into the report where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations and their Subcommittees on Transportation; the Subcommittee on Aviation, Senate Committee on Commerce, Science and Transportation; and the Subcommittee on Aviation, House Committee on Transportation and Infrastructure. We are also sending copies to the Secretary of Transportation, the Administrator of the Federal Aviation Administration, and other interested congressional committees and subcommittees. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors to the report are listed in appendix I. Glenda C. Wright, Senior Information Systems Analyst
Pursuant to a congressional request, GAO reviewed the effectiveness of the Federal Aviation Administration's (FAA) year 2000 program, including the reliability of its year 2000 cost estimate. GAO noted that: (1) FAA's progress in making its systems ready for the year 2000 has been too slow; (2) at its current pace, it will not make it in time; (3) the agency has been severely behind schedule in completing basic awareness activities, a critical first phase in an effective year 2000 program; (4) for example, FAA appointed its initial program manager with responsibility for the year 2000 only 6 months ago, and its overall year 2000 strategy is not yet final; (5) FAA also does not know the extent of its year 2000 problem because it has not completed most key assessment phase activities, the second critical phase in an effective year 2000 program; (6) it has yet to analyze the impact of systems' not being year 2000 date compliant, inventory and assess all of its systems for date dependencies, develop plans for addressing identified date dependencies, or develop plans for continuing operations in case systems are not corrected in time; (7) FAA currently estimates it will complete its assessment activities by the end of January 1998; (8) until these activities are completed, FAA cannot know the extent to which it can trust its systems to operate safely after 1999; (9) the potential serious consequences include degraded safety, grounded or delayed flights, increased airline costs, and customer inconvenience; (10) delays in completing awareness and assessment activities also leave FAA little time for critical renovation, validation, and implementation activities, the final three phases in an effective year 2000 program; (11) with 2 years left, FAA is quickly running out of time, making contingency planning for continuity of operations even more critical; (12) FAA's inventory and assessment actions will define the scope and magnitude of its year 2000 problem; since they are incomplete, FAA lacks the information it needs to develop reliable year 2000 cost estimates; and (13) FAA's year 2000 project manager currently estimates that the entire program will cost $246 million based on early estimates from managers throughout the agency.
Since NAFTA’s implementation, trade between the United States and Mexico has more than doubled, growing from $100 billion in 1994 to $248 billion in 2000. Enhanced trade has increased the number of northbound truck crossings from 2.7 million in fiscal year 1994 to more than 4.3 million in fiscal year 2001. According to DOT, about 80,000 trucks crossed the border in fiscal year 2000, 63,000 of which were estimated to be of Mexican origin. Trucks from Mexico enter the United States at border crossing points in four U.S. states (see fig. 1), but most of the crossings occurred at five ports of entry in fiscal year 2001: Laredo, El Paso, Hidalgo/Pharr in Texas, and Calexico and Otay Mesa in California. Commercial truck traffic at Texas and California ports of entry, which handle approximately 91 percent of truck crossings from Mexico, has grown just over 60 percent since NAFTA went into effect. Table 1 lists the principal commercial ports of entry and the number of truck crossings that occurred at each port in fiscal year 2001. Under NAFTA, barriers have gradually been reduced for trade in goods and services among Canada, Mexico, and the United States. Among other things, NAFTA allows Mexican commercial vehicles greater access to U.S. highways to facilitate trade between the two countries. Under NAFTA’s original timeline, Mexico and the United States agreed to permit commercial trucks to operate within both countries’ border states no later than December 18, 1995, and beyond the border states by January 1, 2000. However, due to U.S. concerns about the safety of Mexican trucks and the adequacy of Mexico’s truck safety regulatory system, the United States postponed implementation of NAFTA’s cross-border trucking provisions and only permitted Mexican trucks to continue to operate in designated commercial zones within Arizona, California, New Mexico, and Texas. DOT’s Office of Inspector General and GAO have reported that out-of-service rates for Mexican trucks operating in the commercial zones exceeded those of U.S. trucks in the nation as a whole. The Inspector General has also reported that the percentage of Mexican trucks placed out-of-service in the commercial zones declined from 44 percent in fiscal year 1997 to 36 percent in fiscal year 2000. In 1998, Mexico challenged the United States’ delay in implementing NAFTA’s schedule for cross-border trucking. In February 2001, a NAFTA arbitration panel ruled that the United States’ blanket refusal to review and consider Mexican motor carrier applications for operating authority to provide cross-border trucking services beyond the commercial zones violated its NAFTA obligations. The panel indicated that under NAFTA, the United States is permitted to establish its own safety standards and ensure that Mexican trucking firms and drivers comply with U.S. safety and operating regulations. However, the panel also noted that due to differing regulatory regimes in each country, the United States need not treat Mexican carriers or drivers exactly the same as those from the United States or Canada, provided that such different treatment is imposed in good faith with respect to a legitimate safety concern and conforms with relevant NAFTA provisions. In February 2001, the administration announced that it would comply with its NAFTA obligations and allow Mexican commercial carriers to operate beyond the commercial zones by January 2002.
In May 2001, DOT issued three proposed rules that would revise existing regulations and application forms and establish a two-tiered application process for Mexican carriers seeking authority to operate within and beyond the commercial zones. Under the proposed rules, a carrier’s authority would be conditioned on satisfactory completion of a safety audit within 18 months of receiving conditional operating authority. According to the Federal Motor Carrier Safety Administration (FMCSA), the agency primarily responsible for enforcing U.S. truck safety regulations, the final regulations would ensure that FMCSA receives adequate information to assess an applicant’s safety program and its ability to comply with U.S. safety standards before it is authorized to operate in the United States. As of December 2001, DOT had not finalized these rules. DOT officials said these rules were not finalized because the Department was waiting for the outcome of the congressional appropriations process. Additional statutory requirements that must be met before Mexican commercial trucks can travel beyond the commercial zones are contained in the fiscal year 2002 DOT appropriations act. These include a range of inspection requirements and facility enhancements, such as adding weigh-in-motion scales at the 10 highest volume crossings. Additional requirements are discussed later in this report. U.S. border state and Mexican transportation officials and representatives of U.S. and Mexican trucking organizations we interviewed said they believe few Mexican carriers will initially apply for authority to travel beyond the commercial zones. Further, they suggested that any increase in truck traffic would be gradual. As of October 2001, fewer than 200 Mexican trucking companies had applied to DOT to operate in the United States beyond the border commercial zones. Regulatory and economic factors may affect Mexican trucking companies’ interest and ability to operate vehicles beyond the commercial zones in the short run, but long-term trends in cross-border trade operations could increase interest in operating beyond the commercial zones. A number of regulatory and economic factors may limit the number of Mexican carriers operating beyond the commercial zones in the near term. For example, U.S. and Mexican officials identified the lack of established business relationships in the United States as a factor likely to reduce the number of Mexican trucking companies willing to operate beyond the commercial zones. According to U.S. and Mexican officials, many Mexican trucking companies lack distribution ties outside the commercial zones and thus do not have immediate access to “backhaul” cargo from the United States to Mexico that would allow them to operate profitably. The officials noted that it would take time to develop these business relationships. The cost and availability of insurance may also affect the number of Mexican carriers operating beyond the commercial zones. According to the National Association of Independent Insurers, newly established trucking companies and Mexican trucking companies wanting to operate beyond the commercial zones face a competitive disadvantage in obtaining affordable insurance. According to companies currently providing insurance to Mexican trucking companies and an insurance industry representative, premiums for Mexican trucking companies will initially be set at the highest level and gradually decline as the market matures.
These individuals stated that it would take time for the insurance industry to become familiar with the financial and safety records of Mexican companies and drivers and to develop effective means to access information for underwriting purposes. Further, large U.S.-based and multinational insurers are likely to enter the Mexican market gradually as demand for insurance in the Mexican trucking industry increases. The number of U.S.-based firms currently providing insurance coverage for Mexican trucks entering the United States is unknown; according to insurance company officials, fewer than 10 U.S. firms may be providing insurance for daily trips into the United States by Mexican trucking companies. Also, according to Mexican private industry representatives and U.S. researchers, congestion and delays in crossing the U.S.-Mexico border result in added operating costs for Mexican carriers. These costs make it less profitable to use newer, more expensive vehicles to wait in lines at the border. Mexican government and private sector officials stated that delays in crossing the border have increased since the terrorist attacks of September 11, 2001. These delays could limit long-haul operations and encourage further reliance on the existing cross-border shuttle (drayage) system. Like other carriers, Mexican carriers must pay registration fees to each state in which they operate in the United States. The International Registration Plan was created to facilitate the payment and reduce the cost of these fees by allowing one member state to collect registration fees and distribute them to other jurisdictions as necessary. However, Mexico is not a member of the plan, so Mexican companies must instead purchase individual trip permits for each state in which they travel. According to an International Registration Plan representative, because these individual trip permits cost more in the aggregate, Mexican carriers could be at a competitive disadvantage. For example, a Mexican truck traveling from Nuevo Laredo, Mexico, to Tulsa, Oklahoma, must purchase trip permits before traveling through Texas and Oklahoma. Table 2 depicts the costs for an International Registration Plan member and a non-member traveling through these states once a week for a year. As seen in table 2, a non-member truck would pay about $5,600 more annually than a member truck. According to an American Trucking Associations official, a medium-term solution for Mexican trucking companies to take advantage of the cost savings and convenience associated with the International Registration Plan would be to register in a state that is a plan member. A Mexican trucking company may participate in the International Registration Plan by selecting a member state and establishing a business presence, such as a sales or service office, in that state. However, establishing such a presence may entail additional costs, such as federal and state taxes. When we discussed these registration fees and potential taxes with Mexican public and private officials in October 2001, they were unaware that they needed to pay them. In commenting on a draft of this report, DOT officials said they have discussed these registration fees with Mexican government officials.
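The arithmetic behind this comparison is straightforward: a plan member pays one apportioned annual fee, while a non-member buys per-trip permits in every state crossed. The sketch below illustrates the calculation; because table 2’s actual fee amounts are not reproduced here, the permit and registration prices are hypothetical placeholders chosen only to reproduce the report’s roughly $5,600 annual difference.

```python
# Illustrative comparison of annual registration costs for an International
# Registration Plan (IRP) member versus a non-member buying individual trip
# permits. Fee amounts are hypothetical placeholders, not the actual Texas
# or Oklahoma charges.

TRIPS_PER_YEAR = 52  # one trip per week, as in the report's example

# Hypothetical per-trip permit prices a non-member pays, by state.
trip_permit_fees = {"Texas": 60.00, "Oklahoma": 55.00}

# Hypothetical annual apportioned fee an IRP member pays once, collected
# by the base state and distributed to the other jurisdictions.
irp_annual_fee = 380.00

non_member_annual = TRIPS_PER_YEAR * sum(trip_permit_fees.values())
member_annual = irp_annual_fee

print(f"Non-member annual cost: ${non_member_annual:,.0f}")  # $5,980
print(f"Member annual cost:     ${member_annual:,.0f}")      # $380
print(f"Annual difference:      ${non_member_annual - member_annual:,.0f}")  # $5,600
```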
The small scale and size of Mexican trucking operations could also limit travel beyond the commercial zones. Mexico’s truck fleet is relatively small compared with that of the United States, and Mexican trucking association representatives said that their members’ fleets have fewer trucks than their U.S. counterparts. For example, there are nearly 600,000 trucking companies with approximately 6.3 million tractors and trailers in the United States, according to DOT. Mexico, in contrast, in 2000 had approximately 83,000 federally registered commercial cargo carriers with approximately 277,000 tractors and trailers (trucks may also be registered by Mexican states if they do not drive on federal highways). Further, the overall age of the Mexican commercial vehicle fleet may also limit the number of Mexican carriers able to operate beyond the commercial zones. According to Mexican registration data, in 2000 only 20 percent of the commercial cargo trucks registered for use on Mexican federal highways were manufactured after 1994. Mexican industry officials told us that trucks manufactured in Mexico prior to this date were not built to U.S. safety and emissions standards. Mexican carriers can apply to have older vehicles certified as compliant with U.S. safety standards. However, Mexican industry officials told us that these vehicles might have difficulties meeting U.S. emissions standards. Uncertainty about DOT’s final rules for obtaining operating authority has reduced the number of Mexican carriers likely to apply initially for authority to operate beyond the commercial zones, according to Mexican government and private sector representatives. According to these officials, this uncertainty makes it difficult to plan for the future, since union contracts allowing travel beyond the commercial zones and distribution ties must be established in advance. Cross-border trucking beyond the commercial zones may increase as firms seek to eliminate inefficiencies associated with the current system of drayage operations. Restrictions on cross-border commercial vehicle traffic have led to a transport system that typically requires three tractors and/or trailers to carry goods from the interior of Mexico to the U.S. interior. For example, a long-haul vehicle brings cargo to the Mexican border from an interior Mexican state, where it is transferred to a short-haul drayage truck that moves the goods across the U.S. border into the commercial zones. To carry a shipment beyond the commercial zones, the cargo must be transferred to a third vehicle domiciled in the United States. This system is cumbersome and inefficient, according to the Office of the U.S. Trade Representative, trucking industry representatives, businesses, and academic researchers. For example, as we reported previously, nearly half of the containers crossing the border from Mexico into the United States in 1998 were empty because they left products or raw materials in Mexico—yet still had to be processed by U.S. Customs. According to U.S. industry representatives and researchers, the time required to complete transfers within the border commercial zones hinders the “just-in-time” nature of many assembly plants (maquiladoras) and agricultural industries and can result in additional costs. They note that a single-truck transport system would be more efficient, more practical, and less costly. In addition, government officials who monitor hazardous materials shipments contend that minimizing transfers and the handling of these loads would decrease the risk of dangerous accidents and spills.
According to researchers and Mexican government officials, technological and other innovations, such as an automated clearance system requiring carriers to provide documentation electronically, would also encourage the development of cross-border trucking beyond the commercial zones by reducing the need for time-consuming paperwork reviews at the border. According to Mexican customs officials, new programs, such as the U.S. Customs Service’s Business Anti-Smuggling Coalition, could encourage the growth of such cross-border trucking by reducing the time spent waiting in lines at the border. DOT faces a number of challenges in implementing a coordinated truck safety system—including acquiring adequate infrastructure and deploying personnel—at the U.S.-Mexico border. Few permanent facilities are in place for truck safety inspections, and DOT only began taking steps to secure its own space for these inspections in August 2001. It also has not fully integrated its inspection personnel and their activities with those of the border states. With regard to emissions inspections, the Environmental Protection Agency (EPA) relies on state governments to establish and apply their own enforcement procedures. These operational challenges must be reconciled with a number of new requirements contained in the fiscal year 2002 DOT appropriations act. Although we reported in 1997 and 2000 that FMCSA needed to be more proactive in securing inspection facilities at planned or existing border installations, the agency only began taking steps to secure its own space in August 2001 and has been occupying temporary space provided by Customs without the benefit of interagency agreements. Currently, only 2 of 25 commercial ports of entry have permanent inspection facilities—both are state facilities in California. Other state facilities are being constructed or planned in the other three border states. However, federal and state officials have not formally agreed on how federal and state facilities will complement each other. Permanent truck inspection facilities allow for more rigorous inspections, provide scales and measuring devices to screen trucks for weight and size, protect inspectors from the extreme heat prevalent at the border, and signal a commitment to enforce truck safety standards. In the three states without permanent facilities—Texas, Arizona, and New Mexico—Customs typically allows state and federal truck safety inspections on the agency’s property on a temporary basis; however, if capacity is reached for storing trucks placed out-of-service, inspectors are unable to conduct additional safety inspections (app. II describes the amount of space designated for truck inspection activities at southwest border ports of entry). For example, the Laredo, Texas, ports of entry handle the greatest number of northbound trucks, accounting for approximately 33 percent of all northbound commercial traffic. In Laredo, Customs has designated space for 33 trucks to be inspected or placed out-of-service, yet according to the U.S. Customs port director in Laredo, approximately 5,500 to 6,100 trucks cross at the two Laredo ports on an average day. As fig. 2 shows, spaces used by federal truck safety inspectors at the World Trade Bridge in Laredo are not covered, nor is there lighting available for inspectors to conduct safety inspections at night.
In light of the limited amount of temporary space for truck inspection activities, FMCSA has recently begun to take steps to acquire its own space in anticipation of increasing border enforcement personnel. FMCSA submitted its space needs for border port of entry facilities to the General Services Administration (GSA) in August 2001 in an attempt to secure space at federal ports of entry. However, it is not clear when or if inspection space at these facilities can be acquired. According to a GSA official, GSA and Customs must first conduct site surveys to determine the amount of vacant space available at port of entry facilities for truck inspections. As a result of heightened security in response to the September 11, 2001, terrorist attacks on the United States, Customs is reassessing its space needs at these facilities, with important implications for truck inspection activities. In discussions among FMCSA, GSA, and Customs held in October 2001, Customs said it would no longer allow trucks placed out-of-service for safety violations to remain on Customs compounds, due to safety concerns related to allowing mechanics and tow truck operators on the compound. Instead, federal and state inspectors must escort these vehicles off the facility. For example, in Texas a tow truck meets out-of-service vehicles at the Customs gate and tows them off the compound. An FMCSA official in Texas said these vehicles are rarely towed to Mexico unless they are empty. It is unclear what effect this development will have on the number and type of truck inspections that can be conducted in both the near and long term at federal ports of entry. As noted, only 2 of the 25 commercial ports of entry have permanent truck inspection facilities—both state-operated facilities located in California. As permanent facilities dedicated to truck safety inspections, they have space to perform inspections and to place vehicles out-of-service (see figs. 3 and 4). The three border states without permanent truck inspection facilities at border ports of entry—Texas, Arizona, and New Mexico—are planning to build facilities at some crossings. To construct truck safety inspection facilities, DOT officials said they plan to make the following allocations based on the fiscal year 2002 DOT appropriations act: $12 million for Texas, $54 million to be divided among the four border states, and $2.3 million for federal facility improvements. Texas plans to build eight permanent truck safety inspection facilities adjacent to the Customs ports of entry. The facilities would be similar in function to California’s truck inspection facilities. City officials in Laredo and El Paso, however, object to the facilities being so close to the border, arguing that they would interfere with the flow of commerce. Local opposition to placing truck inspection facilities at the border and constraints on state funding have impeded progress. State officials estimate that the permanent facilities will not be completed until 2004. In the interim, Texas has established one temporary truck inspection site in El Paso directly adjacent to a federal port of entry facility and began inspecting trucks there in July 2001. Texas also plans to establish four other temporary truck inspection sites directly adjacent to port of entry facilities in Laredo, Eagle Pass, Pharr, and Brownsville. The state plans to lease or purchase 5 acres of land for each of these temporary sites and provide a trailer for office space.
As of November 2001, state officials had not implemented plans for the four temporary truck inspection sites. Arizona and New Mexico have each begun work on a permanent truck inspection facility. In 1998, Arizona acquired a 10-acre lot adjacent to Customs’ port of entry in Nogales on which to construct a permanent truck inspection facility. According to Arizona officials, this project is scheduled for completion in 2002. New Mexico has also started construction of a truck inspection facility in Santa Teresa. According to New Mexico officials, funding is currently available only for the groundwork. Further construction will not be scheduled until funding is available to complete the facility. According to DOT officials, the fiscal year 2002 DOT appropriations act provides funding to hire and train additional federal and state safety inspection personnel. However, federal and state officials have not yet agreed on the level of staffing needed at temporary and permanent truck inspection facilities to achieve safety goals. For example, in Texas, there are no formal agreements between the state and FMCSA about coordinating inspection responsibilities at the ports of entry, or agreements establishing the number of federal and state inspection personnel at the proposed temporary and permanent sites. As of October 2001, there were 58 federal officials inspecting trucks on the southwest border. FMCSA officials said that $9.9 million in fiscal year 2002 funding would permit them to increase the number of enforcement personnel at ports of entry to 141. FMCSA will also use these funds to hire 134 staff who will perform safety audits and conduct compliance reviews of Mexican motor carriers seeking authority to operate beyond the commercial zones. The appropriations act requires that 50 percent of these safety audits and compliance reviews be conducted “on-site.” Mexican officials stated that they would only allow these reviews within their country in the presence of a Mexican inspector. As of October 2001, the 4 border states had assigned 89 inspectors to border crossings to inspect trucks entering the United States from Mexico—43 in Texas, 41 in California, 3 in Arizona, and 2 in New Mexico. The fiscal year 2002 DOT appropriations act also provided $18 million for the border states to hire truck safety inspectors. Prior to passage of the act, Arizona planned to add a total of 11 inspectors and New Mexico planned to add a total of 9 inspectors in 2002 and 2003. Texas did not plan to increase the number of its inspectors until federal and state funds were committed to build inspection facilities. California was unsure how budgetary considerations would change its staffing levels. Under the 1990 Clean Air Act, EPA is required to establish minimum national standards for air pollution, and individual states are assigned primary responsibility to ensure compliance with the standards through state implementation plans. Such plans can include truck emissions inspections. Since 1994, EPA’s primary role in regulating commercial truck emissions has been to certify compliance of commercial truck engines at the factories where they are manufactured. EPA relies on the commercial truck engine manufacturers to certify that their products meet air emissions standards and conducts spot checks at engine factories. Some U.S. states have implemented emissions testing requirements for heavy-duty diesel trucks as part of their efforts to meet EPA air quality standards for non-attainment areas.
State testing programs differ significantly, with some states requiring yearly checks of trucks and others operating both annual and more frequent roadside inspection programs. California, which has a large number of areas that do not meet federal air quality standards, including the state’s two southern border counties, conducts emissions tests at the border. Since 1999, California has assigned two inspectors each to the ports of entry at Calexico and Otay Mesa to monitor the emissions of U.S. and Mexican heavy-duty vehicles. According to California state officials, in 2000 the failure rate for U.S. trucks was approximately 8 percent, while the failure rate for Mexican trucks was 12 percent. Arizona also operates an emissions testing program for commercial trucks, but testing is conducted on a yearly basis for trucks registered in the state’s two non-attainment areas, Phoenix and Tucson—neither of which is located at the border. Neither Texas nor New Mexico performs emissions inspections at the border. The fiscal year 2002 DOT appropriations act provides increased funding for activities related to the safety of Mexican carriers and sets forth a number of new requirements that DOT must meet before Mexican motor carriers can be granted authority to operate beyond the commercial zones. Meeting these requirements could entail significant operational and facility planning by DOT in coordination with the border states and other federal agencies. DOT officials said in December 2001 that they are unsure when they will be able to meet the requirements and fully open the border, given the short time these requirements have been in place. Among other things, DOT must:
- equip all U.S.-Mexico commercial border crossings with scales suitable for enforcement action (5 of the 10 highest volume crossings must have weigh-in-motion scales, and the remaining 5 must have such scales within 12 months);
- require federal and state inspectors to electronically verify the status and validity of the license of each Mexican commercial driver transporting certain quantities of hazardous materials, each driver undergoing specified inspections, and at least 50 percent of other Mexican commercial drivers;
- require Mexican commercial trucks to cross into the United States only where a safety inspector is on duty and adequate capacity exists to conduct a sufficient number of meaningful safety inspections and to accommodate out-of-service trucks; and
- require Level I inspections and Commercial Vehicle Safety Alliance (CVSA) decals for all Mexican commercial vehicles that wish to operate beyond the commercial zones but do not display such decals.
According to FMCSA’s Associate Administrator for Enforcement and Program Delivery, FMCSA plans to measure the progress of Mexican carriers in complying with U.S. safety standards by using truck out-of-service rates, traffic fatality rates, and accident rates. FMCSA’s goal will be for Mexican carriers’ rates to be comparable to those for U.S.-domiciled carriers. Currently, available data do not permit differentiating between drayage (cross-border shuttle) and long-haul carriers operating at the border. Differentiating between these two classes of vehicles when calculating out-of-service rates will be important in determining the extent to which the safety goals are being met.
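To illustrate why this differentiation matters, the sketch below tabulates out-of-service rates separately by carrier class. The records and the carrier_class field are hypothetical—today’s inspection data carry no such field—so this is a minimal sketch of the kind of tabulation FMCSA would need, not an implementation of its system.

```python
from collections import defaultdict

# Hypothetical inspection records. Current border data do not distinguish
# drayage from long-haul carriers, so "carrier_class" is an assumed
# attribute illustrating the tabulation the report calls for.
inspections = [
    {"carrier_class": "drayage",   "out_of_service": True},
    {"carrier_class": "drayage",   "out_of_service": False},
    {"carrier_class": "drayage",   "out_of_service": True},
    {"carrier_class": "long-haul", "out_of_service": False},
    {"carrier_class": "long-haul", "out_of_service": False},
]

totals = defaultdict(int)  # inspections per carrier class
oos = defaultdict(int)     # out-of-service orders per carrier class
for record in inspections:
    totals[record["carrier_class"]] += 1
    if record["out_of_service"]:
        oos[record["carrier_class"]] += 1

# A single aggregate rate would mask the difference between the classes.
for cls in sorted(totals):
    rate = oos[cls] / totals[cls]
    print(f"{cls}: {rate:.0%} out-of-service ({oos[cls]} of {totals[cls]})")
```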
The Mexican government has developed truck safety regulations and reports taking steps to enforce safety and air emissions standards, but these efforts are relatively recent and it is too early to assess their effectiveness. With support from DOT, it has also developed key databases related to commercial vehicle safety, and it has participated in trinational efforts to make U.S., Canadian, and Mexican land transportation standards more compatible. Some Mexican private sector and industry groups have also made efforts to improve the safety of Mexican commercial vehicles by implementing safety programs and purchasing new vehicles. Mexico has developed new regulations establishing specifications for vehicle safety equipment, transportation of hazardous goods, vehicle inspection standards, and maximum limits for emissions of certain chemicals. According to Mexican officials, prior to 1992, Mexico had few vehicle manufacturing and operating safety standards, and those that did exist were very general. Since 1992, Mexico has developed and implemented specific federal regulations dealing with commercial vehicle safety. These include regulations establishing specifications for buses, license plates, vehicle weights, and dimensions. Mexico has also created operating safety standards, including speed limits for commercial motor vehicles. According to DOT, Mexico is considering implementing additional vehicle manufacturing standards, which could be modeled after U.S. or European standards. In addition, Mexico has developed and implemented standards related to the transportation of hazardous goods. These standards address labeling, classifying, inspecting, documenting, storing, and shipping hazardous goods. According to DOT and Mexican officials, the standards are based on the United Nations Recommendations on the Transport of Dangerous Goods. In July 2000, Mexico finalized its first regulation establishing the criteria and authority for roadside commercial vehicle inspections. According to CVSA and Mexican officials, this regulation is modeled after the CVSA inspection procedures and out-of-service criteria. The regulation establishes the procedures used by federal officials for inspecting commercial vehicles and placing them out-of-service. It also establishes a time frame for inspecting these vehicles, ranging from 20 minutes for buses and commercial vehicles carrying hazardous materials to 30 minutes for commercial vehicles carrying general cargo. According to Mexican officials, prior to July 2001, when the regulation was fully implemented, there were no rules for placing commercial vehicles out-of-service, and only the most serious violations would have resulted in putting a vehicle out-of-service. Mexico has also developed and implemented standards limiting commercial vehicle emissions. These standards establish limits for air emissions of hydrocarbons, carbon monoxide, nitrogen oxides, and vehicle smoke from new diesel engines. They also establish limits for vehicle smoke for diesel engines in use, as well as a program for inspecting diesel emissions. According to Mexican officials, commercial vehicles are subject to emissions inspections every 6 months. Mexico’s commercial vehicle inspections are performed by 350 inspectors from the Secretariat of Communications and Transportation—the agency primarily responsible for inspecting commercial vehicles traveling on federal highways. In addition, 5,000 inspectors from the Federal Preventive Police have been trained to conduct inspections.
Many of these inspectors were trained by U.S. border state inspectors. During 2000, Mexican inspectors performed a total of 114,138 roadside vehicle inspections and found 12,929 vehicles in violation of safety standards. In 1999, they conducted 88,490 roadside vehicle inspections and found 5,367 vehicles in violation of safety standards. Mexican federal inspectors also performed compliance reviews of motor carriers at their places of business, conducting 2,441 compliance reviews in 2000 and 1,003 in 1999. While it is encouraging that the Mexican government is making efforts to inspect more commercial trucks, we have no information on the nature of the violations found or whether any sanctions or penalties may have been assessed for them. Further, as noted above, inspections conducted in 1999 and 2000 were not covered under Mexico’s recently implemented (July 2001) commercial vehicle inspection regulations. According to the Secretariat of Communications and Transportation, Mexico plans to increase the percentage of commercial vehicles inspected each year, from 28 percent of the total fleet in 2000 to 50 percent in 2006. The 2001 program set the following minimum inspection activities and inspection-level goals:
- increase the total number of roadside inspections by 27 percent and the total number of carrier compliance reviews by 5 percent over 2000 levels (roughly 145,000 roadside inspections and 2,560 compliance reviews, given the 2000 totals cited above);
- maintain a permanent enforcement presence in each of 10 main highway corridors; and
- conduct 90 roadside inspections and 9 compliance reviews per year per inspector.
In June 2000, Mexico participated in the CVSA-sponsored “Roadcheck 2000” program, a trinational exercise carried out over a 3-day period with the United States and Canada. During this exercise, Mexican officials inspected a total of 1,428 Mexican commercial vehicles along federal highways, putting 246, or about 17 percent, out-of-service. However, as of October 2001, Mexico was not issuing CVSA decals. Mexican officials told us they were not issuing the decals because they are not required by Mexican law. According to Mexican officials, Mexico is in the process of constructing 7 permanent truck inspection facilities similar to the stations in California, with an additional 13 planned. All seven facilities under construction are to be completed by the end of 2001, with an additional six facilities scheduled for completion in 2002 and the remaining seven scheduled for completion in 2003. According to Mexican officials, three of these facilities—Mexicali, Matamoros, and Nuevo Laredo—are being constructed on highways leading to the border. The purpose of these stations, in part, is to inspect and weigh vehicles and thus reduce the number of accidents caused by overweight and unsafe commercial vehicles. According to Mexican officials, the stations will include weigh-in-motion scales and areas to inspect vehicles and remove noncompliant vehicles from circulation. Mexican officials stated that they conducted a study to determine the factors causing accidents involving commercial vehicles. The study found that more than 80 percent of all accidents were caused by driver error. To reduce the number of accidents, Mexico is developing and implementing new training requirements under which each new commercial driver would receive a minimum of 420 hours of driver training, 70 percent of which consists of on-the-road instruction. Drivers renewing their licenses would have to undergo 40 hours of instruction. This expanded training requirement is expected to be fully in place by 2005.
Commercial vehicle drivers responsible for hazardous materials would need to meet additional requirements. At present, drivers can obtain commercial driver’s licenses without such training. Since NAFTA was signed, the Mexican government, with the assistance of FMCSA and TML, a private contractor, has developed and is adding information to several databases. These databases include (1) carrier and vehicle authorizations, (2) commercial driver’s licenses, (3) accidents by Mexican commercial carriers and drivers, (4) results of inspections and audits, and (5) infractions. According to TML, it began working with Mexico to construct these databases in 1995. The databases are an integral piece of Mexico’s motor carrier safety information system. While important for Mexico’s internal purposes, they also provide information needed by U.S. law enforcement to verify driver and carrier information. Two of the five databases were available to U.S. law enforcement in 2000, and the remaining databases were to be available in 2001. The first database, the Carrier and Vehicle Authorization Information System, was completed in 1998 to assist the Mexican government in issuing carrier operating authority permits, vehicle license plates, and vehicle highway permits. According to TML, as of June 2001, the database included all Mexican commercial cargo carriers registered with the federal government. According to Mexican government statistics from 2000, there are approximately 83,000 commercial cargo carriers comprising approximately 8,000 corporations and 75,000 sole proprietorships. These carriers maintained about 372,000 vehicles of all types. U.S. federal inspectors have been able to access this database since October 2000. According to private sector officials, an estimated 75,000 other commercial trucks are registered in Mexican states and are not in this database. Mexican federal officials said that border drayage vehicles also would not be in the federal database, since they do not travel on federal highways and thus are not subject to inspection by federal inspectors. The second database, the Licencia Federal Information System, contains Mexican federal commercial driver’s licenses. It was completed in 1999 to maintain records on commercial drivers and includes driver identification, license status, and medical certifications. According to TML, this database went on-line in January 2000, and as of December 2000, all 46 of Mexico’s field offices issuing commercial driver’s licenses had complete access to it. As of October 2001, 70,150, or 23 percent, of an estimated 300,000 federal commercial driver’s licenses had been entered into the database. However, Mexican government officials say the database has information on 90 percent of the Mexican commercial drivers now crossing the border. Mexican government officials are entering records into this database as drivers renew their licenses and expect the database to contain all records by 2003. U.S. federal and state inspectors have been able to access this database since 2000. Under FMCSA policy, as of November 1, 2001, every Mexican commercial driver entering the United States must have a valid Mexican federal commercial driver’s license in the database; FMCSA officials said drivers without one would be refused entry into the United States.
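The border check FMCSA describes amounts to a lookup against the Licencia Federal Information System. The sketch below shows what such a verification might look like; the record fields and license numbers are illustrative assumptions, since the system’s actual schema is not described in this report.

```python
from datetime import date

# Hypothetical records; the Licencia Federal Information System's actual
# schema is not public, so the fields and license numbers are illustrative.
licencia_federal = {
    "LF-1234567": {"status": "valid", "expires": date(2003, 5, 1),
                   "medical_cert_current": True},
    "LF-7654321": {"status": "suspended", "expires": date(2002, 1, 15),
                   "medical_cert_current": True},
}

def may_enter(license_number: str, on_date: date) -> bool:
    """A driver may enter only if the federal license is on file, valid,
    unexpired, and backed by a current medical certification."""
    record = licencia_federal.get(license_number)
    if record is None:  # not in the database: refuse entry
        return False
    return (record["status"] == "valid"
            and record["expires"] >= on_date
            and record["medical_cert_current"])

print(may_enter("LF-1234567", date(2001, 11, 1)))  # True
print(may_enter("LF-7654321", date(2001, 11, 1)))  # False: suspended
print(may_enter("LF-0000000", date(2001, 11, 1)))  # False: not on file
```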
The third database, the Accident Reporting Information System, was completed in 2000 and records all accidents on Mexico’s federal highways. It includes an accident overview; vehicle, driver, and passenger identification; insurance information; information on damages; and other data. According to TML, phased implementation and interface with the United States were slated for completion by August 2001 but have been delayed because of the change of administrations in Mexico. The fourth database, the Inspections and Audits Information System, was completed in 2000 to record the results of inspections and audits of motor carriers and their facilities. It includes inspection reports, as well as information on violations, infractions, and complaints. Mexican officials told us that these compliance reviews are conducted over a 15-day period. As of June 2001, there were 222 carrier audit records and 7,273 vehicle inspection records. We were unable to obtain information on what these inspections uncovered. The fifth database, the Infraction Information System, was completed in 2000 to process and report infractions committed by Mexican vehicles and drivers on federal highways. Phased implementation of this database began in 2001. According to TML, as of June 2001, there were about 6,000 interstate commerce vehicle infractions and about 7,000 intrastate and private vehicle infractions. Infractions on state or municipal roads are not included in this system. Since NAFTA was signed, Mexico has participated in trinational efforts to make U.S., Canadian, and Mexican land transportation standards more compatible. These efforts have included participation in NAFTA’s Land Transportation Standards Subcommittee (LTSS). In addition, Mexico has entered into bilateral agreements with the United States on specific commercial motor vehicle safety issues. According to an LTSS document, the subcommittee has made major accomplishments in the following areas:
- commercial driver’s licenses—agreement on a common age (21 years) for operating a vehicle in international commerce;
- language requirements—agreement on a common language requirement (the driver must be able to communicate in the language of the jurisdiction where the commercial vehicle is operating);
- drivers’ logbooks and hours-of-service—agreement on the safety performance information each country will require from motor carriers; and
- driver medical standards—recognition of several binational agreements as the basis for recognizing driver medical standards.
The LTSS reports that regulatory differences among the countries have made reaching compatibility in some areas difficult. For example, according to a DOT official, the three NAFTA countries have not been able to reach agreement on commercial vehicle weight standards, maximum weight limits for truck axles, and dimensions (Mexico’s regulations focus on the total length of commercial vehicles, while U.S. regulations focus on the length of the trailer). The United States and Mexico have also entered into binational agreements to ensure the compatibility of commercial vehicle safety standards. Among these are agreements on standards for drug and alcohol tests for drivers and acceptance of commercial driver’s licenses issued by the other country. For example, Mexican officials plan to obtain certification for a Mexican federal government laboratory to conduct drug and alcohol tests by 2002.
DOT officials said the United States is continuing to work with Mexico on a variety of commercial vehicle safety issues, including manufacturing standards and vehicle size and weight limitations. The Mexican private sector reports conducting activities designed to improve the safety of Mexican commercial vehicles. These efforts include conducting inspections to ensure that Mexican vehicles crossing the border meet U.S. safety standards; purchasing new commercial vehicles; and implementing safety rules that, according to Mexican private sector officials, exceed the Mexican government’s requirements. Moreover, according to representatives of Mexican private trucking associations, their members have adopted operating standards similar to those of large U.S. trucking companies. Mexican government officials stated that most trucks now used in border drayage operations would not meet their safety standards. These officials also said that some Mexican trucking companies are purchasing new vehicles in anticipation of operating beyond the commercial zones. According to the Mexican government, the average age of federally registered truck tractors in Mexico is 16 years. In contrast, Mexico’s private trucking association, made up of companies that own and operate their own trucking fleets, said that its members’ vehicles are relatively new, averaging less than 5 years of age. According to association officials, these newer vehicles are the ones most likely to engage in cross-border trucking beyond the commercial zones. In Nuevo Laredo, Mexico, a local trucking association established an inspection station to ensure that vehicles belonging to association members meet U.S. standards. According to association officials, this facility is staffed by private maintenance personnel trained by the Texas Department of Public Safety, and inspections are provided free of charge to all member trucking companies. According to an FMCSA state director, this inspection facility, while not able to affix CVSA decals, represents a positive step toward assuring compliance with U.S. and Mexican safety standards. According to Mexican private industry officials, some Mexican trucking companies have implemented driver education and other operating safety requirements that go beyond the Mexican federal requirements. For example, officials of a Mexican private trucking association said that their members require extensive driver education and use computerized monitoring devices to track driver performance and compliance with company hours-of-service requirements. In the 7 years since NAFTA was implemented, the United States and Mexico have taken a number of steps toward closer economic integration. However, despite a strong trading partnership and other ties, cross-border truck safety issues continue to be challenging. Mexico has taken important steps to enhance its regulatory capabilities, including developing key databases containing driver and carrier information and hiring and training inspection personnel. However, Mexico’s efforts to increase regulation of its motor carrier industry are relatively new; therefore, it is too early to assess their effectiveness. The U.S. border states and DOT have been increasing the number of safety inspectors inspecting trucks entering the country from Mexico, but it is unclear where additional inspectors will work and how they will share inspection responsibilities.
California has built permanent truck safety inspection facilities at two ports of entry, and Arizona has work under way to construct another. At other major crossings, however, only makeshift facilities, at best, are available, and it will be several years until permanent facilities can be built in Texas. Although some progress has been made, there is continued uncertainty about the extent to which Mexican commercial trucks meet U.S. safety standards. While evidence indicates that limited numbers of Mexican carriers will initially operate beyond the commercial zones, additional work is needed if DOT is to reach its goals of having commercial trucks from Mexico meet U.S. safety standards and achieve similar safety performance results. Further, there is still no coordinated operational plan for how truck safety inspection activities will be conducted or agreements with border states on how best to implement them. There is also no clear agreement on the type and size of facilities that are needed, where they will be located, when they will be finished, or whether state and/or federal inspectors will work there. Such agreements and a coordinated operational plan will become increasingly important to develop and implement as DOT works to address statutory requirements and as cross-border trade grows. To ensure that Mexican trucks meet U.S. standards, we recommend that the Secretary of Transportation direct the Administrator of the Federal Motor Carrier Safety Administration to develop and implement a coordinated operational truck safety plan at the southwest border. In addition to meeting statutory requirements, this effort should include:
- taking steps to improve the quality of data to evaluate whether safety goals are being met for both drayage (cross-border shuttle) and long-haul carriers;
- reaching agreements with states and other federal agencies on where inspection facilities will be built, how they will be staffed, and who will operate them; and
- developing a specific timetable for when these actions will be completed.
We received written comments on a draft of this report from the Customs Service, which are reprinted in app. III. We obtained oral comments from DOT, including FMCSA’s Associate Administrator for Enforcement and Program Delivery and other officials; the Office of the U.S. Trade Representative, including the Deputy Assistant for Mexico and NAFTA; GSA, including the head of the Border Stations Center; and the Mexican embassy in Washington, D.C. We also provided copies to the Department of State, which did not provide comments, and EPA, which provided two technical comments. The Office of the U.S. Trade Representative, GSA, and the Customs Service generally agreed with our report’s findings and recommendation. DOT officials agreed with our recommendation that they develop a coordinated operational plan to inspect Mexican trucks at the border. However, they strongly emphasized that they were well advanced in their efforts to fulfill our recommendation as well as to respond to the new truck safety requirements contained in the fiscal year 2002 DOT appropriations act. DOT officials stated that numerous actions critical to the border’s opening are underway or completed and that program implementation timelines and legislative implementation plans are being developed and will be issued shortly.
FMCSA officials noted that they are completing detailed planning for hiring and allocating staff and for securing new high-technology equipment to assist them in accomplishing their mission, and that they have completed a system to track Mexican drivers’ U.S. traffic violation history. DOT officials noted that since the passage of DOT’s fiscal year 2002 appropriations act, high-ranking Department officials will begin meeting immediately with border state officials to coordinate state activities and discuss actions needed to open the border. They noted that detailed work is underway with GSA and Customs to address infrastructure needs at each border port of entry and with the Mexican government to reach agreements on requirements included in the act. DOT officials stated that their past efforts and the efforts they intend to undertake in response to the act provide a comprehensive approach to ensure the safety of Mexican trucks crossing the border. DOT officials also noted that our draft report was completed approximately two weeks before the Congress passed the appropriations act and that, therefore, the information contained in our report predates the requirements specified in the act that the Department must meet before it can fully open the border. We have updated the report to reflect the requirements in the fiscal year 2002 DOT appropriations act—requirements that further highlight the importance of our recommendation that DOT develop a coordinated operational plan for truck safety at the Mexican border. We disagree with DOT’s comments that it is well advanced in its efforts to implement our recommendation as well as the many requirements contained in the appropriations act. Even prior to the act, DOT had not reached agreements with the states on how to allocate their inspectors or with other federal agencies on the space needed to conduct additional truck inspections. These are basic operational issues that have become more complex with new provisions in the appropriations act, such as the requirement for weigh-in-motion technology at the 10 busiest border crossings. In addition, our concerns about DOT’s readiness were seconded in comments we received from Customs officials. Customs officials noted that, as of December 2001, they and GSA were still surveying federal facilities along the border to determine where additional space for DOT truck inspections could be found. The additional space becomes more important as a result of the act’s provisions for more inspectors and more inspections, and the heightened probability that more space will be needed for Mexican trucks placed out of service. Customs officials expressed continued concern that DOT has not fully developed adequate operational plans to conduct truck safety inspections at federal border facilities. State and agency officials also provided technical comments, which we incorporated, where appropriate, throughout the report. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to congressional committees with responsibilities for trade and transportation safety issues; the Secretary of Transportation; the Secretary of State; the U.S. Trade Representative; the Commissioner of the U.S. Customs Service; the Administrator, Environmental Protection Agency; the Administrator, General Services Administration; and the Director, Office of Management and Budget.
We will also make copies available to others upon request and on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979. Key contributors to this report are listed in appendix IV. To estimate the extent to which Mexican-domiciled commercial trucks are likely to travel beyond the commercial zones, as well as the factors inhibiting or encouraging Mexican carriers to operate beyond these zones, we contacted U.S. federal, state, and local officials, as well as trucking industry representatives, public interest groups, insurance companies and associations, and academics familiar with the Mexican trucking industry. We also reviewed applications filed with the Department of Transportation (DOT) by Mexican commercial motor carriers wishing to operate beyond the commercial zones. In addition, we interviewed Mexican government officials and industry and union representatives, and reviewed statistical data on the Mexican trucking industry. To obtain information on U.S. government agencies’ efforts to ensure that Mexican trucks entering the United States meet safety and emissions standards, we interviewed officials with the Federal Motor Carrier Safety Administration, the Federal Highway Administration, the Environmental Protection Agency, and the Office of the U.S. Trade Representative in Washington, D.C. We also interviewed state and local government officials in the border states and visited ports of entry in Laredo, Texas; Otay Mesa and Calexico, California; and Nogales, Arizona. We reviewed documents provided by DOT, attended congressional hearings on the issue, and reviewed DOT’s notices of proposed rulemaking and comments regarding the entry of Mexican trucks into the United States. In addition, we reviewed data contained in DOT’s motor carrier management information system to understand its reliability and limitations. To understand how Mexican government and private sector efforts contribute to ensuring that Mexican commercial vehicles entering the United States meet U.S. safety and emissions standards, we met with officials from the Mexican Embassy in Washington, D.C., as well as the Secretariats of Communications and Transportation, Economy, External Relations, and Environment and Natural Resources in Mexico City. We also reviewed Mexico’s regulations dealing with commercial vehicle safety and emissions. However, because of time constraints we were unable to visit the inspection facilities under construction or observe enforcement actions taking place. To understand how the U.S. and Mexican commercial vehicle and driver safety databases function and interconnect, we met with officials from TML, the private contractor working to develop and connect these databases, as well as U.S. and Mexican government officials. We also observed the databases in use in Laredo, Texas; Otay Mesa, California; and Mexico City. To understand the private sector’s efforts to improve the safety of their vehicles and their compliance with U.S. safety and emissions standards, we met with Mexican government and private sector officials, toured a large Mexican trucking firm interested in conducting operations beyond the commercial zones, and visited a privately funded inspection facility in Nuevo Laredo, Mexico.
To describe efforts to harmonize safety and emissions standards, we attended a conference on vehicle safety and emissions standards co-sponsored by the United States and Mexico, and we interviewed DOT and Mexican officials involved in the Land Transportation Standards Subcommittee and other groups. We conducted our work in accordance with generally accepted government auditing standards from June to November 2001. The U.S. Customs Service allows state and federal truck inspectors to inspect trucks on its compounds. However, because interagency agreements among FMCSA, GSA, and Customs have not been established, space at such locations is temporary and available only as long as Customs allows its continued use. The exception is in California, where the state operates two permanent truck inspection facilities, Calexico and Otay Mesa, that are located just outside the federal ports of entry. Truck inspection activities do not occur at the federal facilities in California. Table 3 provides an overview of the amount of space currently designated for truck inspection activities at commercial ports of entry along the southwest border. In addition to the person listed above, Jason Bair; Patricia Cazares-Chao; Janey Cohen; Peter Guerrero; Elizabeth McNally; Jose M. Pena, III; Sarah Prehoda; James Ratzenberger; Maria Santos; and Hector Wong made key contributions to this report.
The North American Free Trade Agreement (NAFTA) allowed Mexican commercial trucks to travel throughout the United States. Because of concerns about the safety of these vehicles, the United States has limited Mexican truck operations to commercial zones near the border. Relatively few Mexican carriers are expected to operate beyond these commercial zones once the United States fully opens its highways to Mexican carriers. Specific regulatory and economic factors that may limit the number of Mexican carriers operating beyond the commercial zones include (1) the lack of established business relationships beyond the U.S. commercial zones that permit drivers to return to Mexico carrying cargo, (2) difficulties obtaining competitively priced insurance, (3) congestion and delays in crossing the U.S.-Mexico border that make long-haul operations less profitable, and (4) high registration fees. Over time, improvements in trucking and border operations may increase the number of Mexican commercial vehicles traveling beyond the commercial zones. GAO found that the Department of Transportation (DOT) lacks a fully developed or approved plan to ensure that Mexican-domiciled carriers comply with U.S. safety standards. DOT has not secured permanent space at any of the 25 southwest border ports of entry where commercial trucks enter the United States, and only California has established permanent inspection facilities. DOT also has not completed agreements with border states on how 58 federal inspectors and 89 state inspectors will share inspection responsibilities along the border. States are responsible for ensuring that Mexican trucks adhere to U.S. emissions standards. California is the only southwest border state with a truck emissions inspection program in place at the border. Although the Mexican government has developed truck safety regulations and taken steps to enforce safety and air emissions standards, these efforts are relatively recent and it is too early to assess their effectiveness. With DOT's support, Mexico has developed five databases on the safety records of its commercial drivers and motor carriers. As of October 2001, however, the commercial driver's license database covered less than one-quarter of Mexico's commercial drivers. Mexico has also participated in NAFTA-related efforts to make motor carrier safety regulations compatible across the three member nations. Mexican private industry has also sought to improve the safety of Mexican commercial vehicles.
BSEE’s mission is to promote safety, protect the environment, and conserve resources offshore through vigorous regulatory oversight and enforcement. BSEE’s headquarters—located in Washington, D.C., and Sterling, Virginia—is responsible for setting national program policy to meet the bureau’s mission. BSEE’s three regional offices—the Gulf of Mexico regional office in New Orleans, Louisiana; the Pacific regional office in Camarillo, California; and the Alaska regional office in Anchorage, Alaska—are responsible for executing oversight of oil and gas activities, such as conducting inspections of all facilities on the OCS. The Gulf of Mexico regional office oversees five district offices: Houma, Lafayette, Lake Charles, and New Orleans, Louisiana, and Lake Jackson, Texas. The Outer Continental Shelf Lands Act of 1953, as amended (OCSLA), requires Interior to inspect each offshore oil and gas facility at least once per year. OCSLA also authorizes Interior to conduct periodic unscheduled—unannounced—inspections of these facilities. BSEE carries out these inspections on behalf of the Secretary throughout America’s 1.7 billion acres of the OCS. BSEE’s Office of Offshore Regulatory Programs is responsible for overseeing the bureau’s national inspection program, which is carried out by the bureau’s regional offices. During inspections, BSEE inspectors scrutinize all safety system components designed to prevent or ameliorate blowouts, fires, spillages, or other major accidents; check for compliance with current plans, lease terms, and appropriate stipulations; and verify the installation, operation, and maintenance of all appropriate safety and antipollution devices. They perform the inspections, in part, by using a checklist derived from regulated safety and environmental requirements. If an inspector identifies a violation of safety or environmental standards at an offshore facility, BSEE issues the operator a citation known as an incident of noncompliance (INC). An INC may be issued in the form of (1) a warning, (2) an order to shut down a particular component of the facility (when it can be shut down without affecting the overall safety of the facility or operations), or (3) an order to shut down an entire drilling rig or production platform in cases when the violation could result in serious consequences to the environment or human health and safety, such as a fire or spill. Operators generally have 20 days to correct the violation and notify Interior that it has been corrected. BSEE is responsible for ensuring compliance with OCSLA and provisions of other federal laws, including the National Environmental Policy Act (NEPA). BSEE’s Environmental Compliance Division establishes national strategic goals, programs, and procedures to increase the accuracy, effectiveness, and consistency of all bureau environmental compliance policies and initiatives. BSEE’s Office of Environmental Compliance, located in the Gulf of Mexico regional office, is staffed by environmental engineers, scientists, and specialists who are responsible for BSEE’s NEPA compliance program, as well as field and office environmental compliance verification.
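The graduated INC structure described above can be read as a simple decision rule keyed to the severity of the violation and whether the affected component can be isolated. The sketch below is an illustrative encoding of that rule under stated assumptions—the severity labels and thresholds are simplifications, not BSEE’s actual enforcement criteria.

```python
from enum import Enum

# Illustrative encoding of the three INC forms described above. The
# severity labels and decision thresholds are simplified assumptions,
# not BSEE's actual enforcement criteria.
class IncType(Enum):
    WARNING = "warning"
    COMPONENT_SHUTDOWN = "shut down facility component"
    FACILITY_SHUTDOWN = "shut down rig or platform"

CORRECTION_WINDOW_DAYS = 20  # operators generally have 20 days to correct

def classify_inc(severity: str, component_isolable: bool) -> IncType:
    """Shut the whole facility when the violation could seriously harm the
    environment or human safety; shut a single component when it can be
    shut down without affecting overall facility safety; else warn."""
    if severity == "serious":
        return IncType.FACILITY_SHUTDOWN
    if severity == "moderate" and component_isolable:
        return IncType.COMPONENT_SHUTDOWN
    return IncType.WARNING

print(classify_inc("moderate", component_isolable=True))
# IncType.COMPONENT_SHUTDOWN
```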
We have previously reported on Interior’s challenges with managing federal oil and gas resources. In September 2008 and July 2009, we found shortcomings in Interior’s ability to ensure that royalty payment data were reasonable and complete. In addition, in March 2010, we found that Interior’s policies and practices did not provide reasonable assurance that oil and gas produced from federal leases was being accurately measured and that Interior experienced challenges hiring, training, and retaining qualified staff to provide oversight and management of oil and gas operations on federal lands and waters. Further, we have reported that organizational transformations are not simple endeavors and require the concentrated efforts of both leaders and employees to realize intended synergies and accomplish new organizational goals. We were also concerned about Interior’s ability to balance continued delivery of services with transformational activities in view of the department’s history of management problems and challenges in the human capital area. In December 2015, BSEE issued its Fiscal Year 2016–2019 Strategic Plan. The strategic plan identifies goals to improve BSEE’s operations—including safety and environmental oversight—as well as its internal management. BSEE’s key strategic initiatives to improve safety and environmental oversight include developing a risk-based inspections program and promoting environmental stewardship. Its key strategic initiatives to improve internal management include enhancing decision making as well as communication and transparency. BSEE leadership has started several initiatives to improve its safety and environmental oversight capabilities, but its limited efforts to obtain and incorporate input from within the bureau have hindered its progress. Since 2012, BSEE has sought to augment its annual inspection program with a risk-based inspection program, but limited efforts to obtain and incorporate input from experienced regional personnel have hindered BSEE’s ability to develop and implement the risk-based program. Additionally, in 2016, BSEE conducted an environmental stewardship initiative consisting of two simultaneous environmental risk reduction efforts, but these efforts were overlapping, fragmented, and uncoordinated, which reduced the effectiveness of the initiative and hindered the implementation of identified improvements. Since it was established as a separate bureau in 2011, BSEE leadership has continued an initiative begun by its predecessor to transition the bureau’s inspection program to a risk-based approach. In 2012, BSEE leadership started a new initiative that included the development of a risk model and an approach for inspecting production facilities based on the risk they pose. However, BSEE leadership’s limited efforts to obtain and incorporate input from regional staff and management during the program’s development led to poor pilot results. As a result, BSEE has changed the focus of the program and reduced expectations for its initial approach to risk-based inspections. Interior’s efforts to conduct oversight based on risk date back to the 1990s. In 1998, MMS, BSEE’s predecessor organization, contracted with Carnegie Mellon University for a study to develop a model to target inspections of offshore facilities based on risk. MMS did not implement the model at the time because it was too complex, according to BSEE officials.
In 2009, one year before the Deepwater Horizon incident and the 2010 dissolution of MMS, the Gulf of Mexico Regional Office piloted a risk-based inspection strategy in the Houma, Louisiana, and Lake Jackson, Texas, districts that regional management recommended for immediate implementation. However, BSEE officials told us that the 2010 Deepwater Horizon incident and Interior's 2010 Safety and Environmental Management System (SEMS) regulation prompted the bureau to reconsider approaches to conducting risk-based inspections. Since 2011, when it was established as the successor to MMS, BSEE has highlighted in every Interior budget justification for the bureau its ongoing efforts to identify and increase oversight of the highest-risk facilities and operators. Additionally, BSEE affirmed its intentions in its 2016-2019 Strategic Plan to develop this risk-based inspection capability as part of its National Inspection Program. In 2012, BSEE began an initiative to develop an approach for conducting inspections of offshore facilities based on the level of risk they pose. Specifically, BSEE engaged Argonne National Laboratory (Argonne) to develop a quantitative model to serve as the foundation of BSEE's risk-based inspection capability. The model ranks offshore production platforms according to five indicator factors: (1) whether the facility is a major complex, (2) whether the facility's slot count is 15 or greater, (3) the number of inspections resulting in an INC in the previous year, (4) whether the facility experienced an incident—such as an explosion, fire, fatality, or injury—in the previous year, and (5) whether the facility experienced an incident in the previous 2 years. BSEE intended risk-based inspections to augment the required annual inspections, using the results of the Argonne model to identify facilities for supplemental multi-day inspections focusing on each facility's risk management strategies. According to 2015 BSEE documentation on its risk-based approach, the bureau planned to eventually shift inspection resources from lower-risk facilities to higher-risk facilities and transition the overall inspection program from annual compliance inspections to a risk-based approach to more effectively use BSEE's available inspection resources. However, to date, BSEE has not successfully implemented this supplemental risk-based inspection capability in the 5 years since taking over the initiative from MMS. BSEE leadership led the development of the risk-based program; however, according to officials, it did so with little input from regional personnel. Officials in the Gulf of Mexico region with knowledge and experience conducting previous risk-based inspection efforts told us they were not apprised of key program products until those products were well under development and were given little opportunity to provide comment on them. As a result, BSEE first identified deficiencies with its risk-based program during pilot testing in 2015, rather than working closely with experienced regional personnel earlier in the process to obtain their input to identify potential deficiencies and remediate them during program development. For example, BSEE identified deficiencies in three components of its proposed inspection program: (1) an underlying risk model for ranking all production platforms, (2) the annual inspection planning methodology, and (3) the facility-specific inspection protocol. Risk Model.
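To make the model's structure concrete, the five-factor ranking described above can be illustrated with a minimal sketch. The data fields, the unit weighting, and the scoring rule are assumptions for illustration only; the report does not specify how Argonne's model combines or weights the factors.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    facility_id: str
    is_major_complex: bool       # factor 1: major complex
    slot_count: int              # factor 2: slot count of 15 or greater
    incs_last_year: int          # factor 3: inspections resulting in an INC in the previous year
    incident_last_year: bool     # factor 4: explosion, fire, fatality, or injury in the previous year
    incident_last_2_years: bool  # factor 5: incident in the previous 2 years

def risk_score(p: Platform) -> int:
    """Combine the five indicator factors into a single score (unit weights assumed)."""
    return (
        (1 if p.is_major_complex else 0)
        + (1 if p.slot_count >= 15 else 0)
        + p.incs_last_year
        + (1 if p.incident_last_year else 0)
        + (1 if p.incident_last_2_years else 0)
    )

def rank_platforms(platforms: list[Platform]) -> list[Platform]:
    """Order platforms from highest to lowest indicated risk."""
    return sorted(platforms, key=risk_score, reverse=True)
```

Note that a ranking of this form carries no information about ownership changes, operator bankruptcy, the severity of an INC, or how many citations a single inspection produced, which is the substance of the regional officials' critique that follows.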
BSEE regional officials who have longstanding experience evaluating offshore risk told us that the model is not sophisticated enough to identify platforms for risk-based inspection planning, and that they could have identified its deficiencies earlier in the program development process. Specifically, they said that the model does not contain sufficient information to target facilities for additional risk-based inspections. For example, Argonne's model does not incorporate risk factors such as a facility's change in ownership status or operator bankruptcy—factors that BSEE regional officials told us can be correlated with higher risk, as operators tend to reduce expenditures on maintenance at these times. Additionally, the model does not account for the severity of incidents of noncompliance—for example, whether an incident results in an order to shut down a facility or only a warning—or the number of citations assessed—such as whether a facility was cited many times or only once in a single inspection. Some BSEE regional officials considered these types of operator performance and risk-related intelligence to be as important as, or more important than, the five factors assessed by the model for identifying high-risk facilities. BSEE headquarters worked directly with Argonne on the risk model, and although headquarters officials said they included regional personnel, they did not provide us with evidence of efforts they made to include those personnel or obtain their input on the risk model's initial development. BSEE headquarters officials told us that Argonne reached out periodically to senior regional personnel, but they did not specify when the laboratory conducted such outreach, what contributions regional personnel made, or whether regional personnel raised concerns during Argonne's outreach. Conversely, BSEE regional personnel told us that BSEE headquarters did not inform them of the development of a risk model or ask them for input leading up to the pilot. Inspection Planning Methodology. In 2015, BSEE outlined an inspection planning methodology founded on Argonne's quantitative risk model that describes how BSEE would target and plan supplemental safety inspections for offshore production platforms. BSEE's inspection planning methodology prescribes the use of two additional categories of information, alongside Argonne's model, to select production platforms for supplemental risk-based inspections. Specifically, it states that BSEE would use the model's ranking to identify the 20 percent of platforms that pose the highest risk. BSEE would then consider information on operator performance—reported hydrocarbon releases, number of incidents of noncompliance assessed in each category, and the quality of SEMS audit reports—and other risk-related intelligence—including proximity to shore, production rates, and inspector assessment of overall safety—to further narrow the selection of high-risk facilities (this two-stage selection is sketched below). BSEE planned to test its inspection planning methodology by selecting and conducting five pilot inspections in late 2015 and early 2016. According to BSEE's program deployment and implementation plan, the bureau applied Argonne's model to identify the pilot inspections in the Lafayette district.
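A minimal sketch of the prescribed two-stage selection follows. The intelligence fields and the way they are combined are illustrative assumptions; the methodology describes the categories of information to consider but not a specific scoring rule.

```python
import math

def select_pilot_facilities(ranked_ids: list[str], intel: dict, n_targets: int = 5) -> list[str]:
    """Two-stage selection: model ranking first, then narrowing with operator
    performance and other risk-related intelligence.

    ranked_ids: facility IDs ordered highest risk first by the model's ranking.
    intel: per-facility operator performance and risk-related intelligence
           (field names here are assumptions for illustration).
    """
    # Stage 1: keep the 20 percent of platforms the model ranks highest.
    shortlist = ranked_ids[: math.ceil(0.20 * len(ranked_ids))]

    # Production status is part of the "other risk-related intelligence" the
    # methodology cites; in the 2015 pilot, three of the five selections were
    # idle, non-producing facilities that could not be inspected.
    active = [f for f in shortlist if intel[f]["producing"]]

    # Stage 2: narrow further using operator performance and other
    # intelligence rather than relying on the model ranking alone.
    def concern(f: str) -> float:
        d = intel[f]
        return (
            d["hydrocarbon_releases"]
            + d["inc_count_all_categories"]
            + (1.0 if d["weak_sems_audit"] else 0.0)
            + (1.0 if d["poor_inspector_assessment"] else 0.0)
        )

    return sorted(active, key=concern, reverse=True)[:n_targets]
```

As the pilot selections discussed next illustrate, skipping the second stage defeats the purpose of collecting this intelligence in the first place.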
However, although BSEE's inspection planning methodology prescribed the incorporation of additional information on operator performance and other risk-related intelligence in its selection of pilot facilities, a BSEE regional official told us that during the Risk-Based Oversight Team's discussions, BSEE leadership relied heavily on the risk model alone. Furthermore, although regional personnel participated on the Risk-Based Oversight Team when it selected the pilots, a regional official told us they were largely sidelined during the discussions. As a result, regional officials told us the pilot selections were not among the highest-risk facilities. For example, three of the top five facilities BSEE selected were idle and not producing and therefore were not inspected as part of the pilot. By departing from BSEE's inspection planning methodology, BSEE leadership appears to have excluded the input of regional personnel, undercutting the pilot effort and raising questions about whether the bureau's leadership has the commitment necessary to enable the successful implementation of its risk-based program. Inspection Protocol. BSEE's inspection planning methodology also specified that the Risk-Based Oversight Team should develop an inspection protocol in advance of conducting risk-based inspections that is tailored to each facility and describes the roles and responsibilities of personnel, including what components and safety systems will be reviewed or tested. Additionally, BSEE's program methodology describes the protocol for deliverables and the dissemination of the inspection results. However, BSEE did not establish a clear pilot inspection protocol for the inspection team and operator for the first pilot, which led to confusion for BSEE personnel and the operator. Specifically, BSEE officials involved in the inspection told us that headquarters did not inform inspection team members of their responsibilities, resulting in ineffective use of time. In turn, for the second pilot inspection, BSEE officials told us that BSEE leadership asked regional personnel to develop the inspection protocol. Officials told us that the second pilot inspection was an improvement over the first because personnel were better prepared to carry out their responsibilities. However, officials said the inspection proved to be more time-consuming than BSEE expected, particularly when compared to the time required to conduct a typical annual inspection. Specifically, according to one official, the inspection team needed between 500 and 600 total work hours to complete the pilot inspection, in part due to the time required to develop a facility-specific protocol. For comparison, the official told us that a typical annual inspection of a deepwater platform requires about 100 total work hours. In addition, the official told us that annual inspections are a more comprehensive review of a facility's safety systems because inspectors test and validate all necessary components, whereas a risk-based inspection considers only specific aspects of safety performance culture. Therefore, it is not clear whether risk-based inspections, as performed during the pilot, have proven to be a more effective method for evaluating safety relative to annual inspections. Additionally, BSEE's inspection planning methodology prescribes that the Risk-Based Oversight Team provide final pilot reports to the operators of the facilities at the earliest opportunity.
However, according to officials, BSEE did not provide the operator of the first pilot facility with a report of its findings. Similarly, they said BSEE provided the operator of the second pilot facility with only a verbal debrief until the operator requested a report through BSEE's regional office. Because BSEE did not provide formal reports to operators included in both pilots in a timely manner, a BSEE debrief noted that one of the operators was confused about the final results of the inspection. The purpose of the risk-based inspection initiative is to provide operators with the opportunity to address issues and improve their safety management systems, for which they need timely access to inspection results. BSEE headquarters led the development of the inspection planning methodology and the facility-specific inspection protocol without obtaining and incorporating input from regional personnel who had knowledge and experience conducting risk-based inspection efforts. The Gulf of Mexico region has been expected to evaluate risk routinely when planning inspections since at least 2000, because BSEE's inspection policy stipulates that the region is to conduct supplemental unannounced inspections based on a quantitative and qualitative assessment of risk. In addition, BSEE's inspection policy states that the bureau is to evaluate quantitative and qualitative risk assessment criteria to determine whether a facility's annual scheduled inspection should be a complete inspection or an inspection of a selected sample of safety components. Furthermore, personnel from that region conducted a risk-based inspection pilot in 2009 in the Houma, Louisiana, and Lake Jackson, Texas, districts that regional management recommended for immediate implementation. Nevertheless, regional officials who had knowledge of the 2009 pilot said that BSEE headquarters developed the first facility-specific inspection protocol without their input. According to officials, BSEE headquarters proceeded with pilot inspections before regional personnel had the opportunity to raise concerns about the risk model, the inspection planning methodology, and the facility-specific inspection protocol. As a result of these deficiencies, officials involved in the first pilot inspection described it as a failure that produced few, if any, results. Only after the first pilot did BSEE leadership begin to engage regional personnel and incorporate their input on the program, according to officials. In response to the deficiencies BSEE identified during the first two risk-based pilot inspections, in July 2016, BSEE revised the risk-based inspection program based on a proposal that regional personnel told us they developed, which incorporates a risk-based methodology that they had previously used in the Gulf of Mexico. Specifically, to supplement the facility-based approach that BSEE leadership had been developing since 2012 based on Argonne's risk model, BSEE regional personnel proposed reconstituting an inspection methodology, called "blitz" inspections, that MMS used prior to the Deepwater Horizon incident. Blitz inspections focus on specific facility components—such as compressors, generators, or cranes—that the bureau determines are high-risk based on analyses of trends in incidents. Officials told us that they added this tier of inspections because it allowed them to target risk across more facilities in less time than is required for comprehensive risk-based facility inspections.
Specifically, BSEE intends for a typical round of blitz inspections to encompass approximately 50 facilities for 2 to 3 hours each. Under the initial program methodology developed by BSEE leadership, BSEE stated that it would be able to use the facility-based methodology as a systematic way of deciding where to commit annual inspection resources. However, officials said the bureau now anticipates using the risk-based methodology to target no more than five facilities per year, instead of the more than 20 per year officials originally estimated. Rather, BSEE's revised program methodology will use both blitz inspections and facility-based inspections informed by Argonne's model. BSEE's current plans are to conduct additional pilots under the revised program methodology prior to implementation in fiscal year 2017. In July 2003, we found that when implementing large-scale management initiatives, a key practice is involving employees to obtain their ideas and gain their ownership by incorporating employee feedback into new policies and procedures. We found that employee involvement strengthens the process and allows employees to share their experiences and shape policies, and that in leading organizations, management and employee representatives work collaboratively to gain ownership for these changes. Further, management's responsibility to develop policy and programs in a collaborative manner is established in both BSEE's internal policy and federal internal control standards. BSEE's inspection policy states that headquarters is responsible for coordinating the development of national inspection policy, including taking into account region-specific circumstances. BSEE regional leadership is responsible for administering and implementing the inspection policy; therefore, regional leadership logically would be a key contributor to helping develop BSEE inspection policy. In addition, under the Standards for Internal Control in the Federal Government, management should internally communicate the necessary quality information to achieve its objectives. For example, quality information is communicated down, across, up, and around reporting lines to all levels of the entity, and management receives information about the entity's operational processes that flows up the reporting lines from personnel to help management achieve the entity's objectives. Therefore, systematic input from within the entity would help it achieve its objectives. However, BSEE management made limited efforts to obtain and incorporate input from regional personnel in developing the three components of the risk-based inspection program, which contributed to deficiencies that led to an unsuccessful pilot, and ultimately, BSEE has been unable to achieve its goal of implementing a systematic risk-based inspection program. Without an Interior organizational unit at a higher level than BSEE (i.e., higher-level oversight independent of BSEE) establishing a mechanism for BSEE management to obtain and incorporate input from personnel within the bureau, BSEE's risk-based inspection program could experience continued delays and implementation problems. BSEE leadership initiated two simultaneous Environmental Stewardship efforts to reduce environmental risks related to U.S. offshore oil and gas operations, but the efforts were partially overlapping, fragmented, and uncoordinated, which reduced the value of the outputs. In 2015, BSEE leadership sought to establish a baseline for environmental risks associated with U.S.
offshore oil and gas operations and measure the effectiveness of its environmental protection functions and environmental stewardship priorities to better implement BSEE's mission. These efforts included (1) identifying potential environmental risks associated with offshore oil and gas operations; (2) identifying current BSEE functions meant to regulate and manage those risks; (3) linking BSEE environmental stewardship priorities to specific industry activities and associated risks; and (4) identifying potential environmental stewardship gaps where BSEE functions might not be fully addressing industry activities with high environmental risk. These efforts were led and coordinated by BSEE leadership in the Environmental Compliance Division at headquarters, which BSEE created in 2015 to establish national strategic goals and procedures for the bureau's environmental compliance activities. As part of the Environmental Stewardship initiative, BSEE conducted two environmental risk reduction efforts. Specifically, in December 2015, BSEE headquarters engaged Argonne to conduct an Environmental Risk Assessment, and in February 2016, established an internal Environmental Stewardship Collaboration Core Group (Core Group) composed of BSEE personnel. In July 2016, both Argonne and the Core Group produced final reports summarizing their findings. Both reports found that some of BSEE's activities, such as those focused on safety oversight, were not clearly linked to environmental stewardship. Argonne also reported that some environmental protection and stewardship activities are not described in sufficient detail in BSEE regulations, policies, and interagency agreements. Argonne recommended that BSEE clarify functions that primarily focus on safety to explicitly identify environmental protection as an aspect of safe operations. Likewise, the Core Group found that some programs' relationships to environmental stewardship might not always be readily apparent to program staff or more broadly within BSEE. The efforts were overlapping because BSEE leadership tasked both Argonne and the Core Group with the same five objectives to identify: (1) linkages and gaps in BSEE's environmental stewardship of offshore oil and gas operations, (2) all environmental risks in offshore oil and gas operations, (3) mitigations already in place to reduce the identified environmental risks, (4) stewardship priorities for the Environmental Compliance Division, and (5) opportunities for improvement of BSEE environmental stewardship. However, the efforts were also fragmented because BSEE leadership did not effectively coordinate the execution of these efforts, which hindered information sharing between Argonne and the Core Group that could have enhanced the value of each effort's report. Instead, both efforts were executed simultaneously with little evidence of information sharing or communication. For example, Argonne presented its work at the Core Group's initial meeting in February 2016; however, at that time, Argonne had not yet completed the majority of its contracted tasks. BSEE officials involved in the Core Group also told us that Argonne did not contribute to the Core Group activities throughout the effort. According to BSEE officials, Argonne's findings were added to the Core Group report by bureau leadership following the completion of the Core Group's assessment and without discussion or assessment by Core Group team members.
Similarly, some officials involved in the Core Group said that BSEE headquarters did not communicate the objectives of the Argonne effort, thereby limiting the ability of the Core Group to coordinate with Argonne to maximize its results. Furthermore, Argonne did not have access to bureau information and personnel that could have enhanced its efforts. Argonne's report stated that BSEE regional experts had information and technical knowledge that could be used to review Argonne's assumptions and to identify additional industry activities for analysis. Argonne also stated that it may have over- or underestimated potential risks, and did not determine the effectiveness of BSEE's environmental stewardship activities. In turn, Argonne recommended that BSEE regional subject matter experts review the assumptions used in the risk evaluation and repeat the risk characterization using parameters that those experts determine to be more appropriate; in effect, because Argonne was aware of the limitations of its assessments, it recommended that regional experts redo and validate them. In addition to its report, Argonne provided the bureau with a spreadsheet-based risk assessment tool for BSEE to use during office verification and field monitoring. However, given Argonne's concern about the accuracy of its analysis, BSEE plans to review and verify Argonne's work. In addition to its report, the Core Group established a bureau-wide definition for environmental stewardship, and BSEE leadership drafted three work plans. The work plans include one to promote environmental stewardship on a continuous basis, one to redo Argonne's analysis, and one to create a manual of environmental compliance standard operating procedures for several of its core functions. BSEE anticipates that this work will be ready for management review in January 2017. BSEE headquarters was responsible for coordinating with Argonne officials to ensure they had access to BSEE subject matter experts during the assessment, especially for the risk characterization and ranking task. Because effective coordination did not occur, the resources devoted to these two simultaneous analyses were not used efficiently. BSEE's National Environmental Compliance Policy calls for coordination within the bureau when developing national policies and procedures. When BSEE initiated these efforts, bureau policy stated that communication and coordination within the bureau and with external stakeholders is an essential component of success for its environmental division. In April 2016, BSEE updated its national policy but maintained an emphasis on good coordination across the bureau. Specifically, the current policy states that the Environmental Compliance Division collaborates within the bureau on national efforts to develop goals and policies. Furthermore, communication is an element of good federal internal controls. Under the Standards for Internal Control in the Federal Government, management should internally and externally communicate the necessary quality information to achieve the entity's objectives. Because BSEE management tasked both environmental risk reduction efforts with the same objectives and did not effectively communicate information to coordinate the efforts, the efforts overlapped and ultimately delivered few results that BSEE can implement immediately.
Without higher-level oversight within Interior establishing a mechanism for BSEE management to obtain and incorporate input from personnel within the bureau and any external parties, such as Argonne, that can affect the bureau's ability to achieve its objectives, BSEE's Environmental Stewardship efforts are likely to experience continued implementation and efficacy problems. Since 2013, BSEE has begun four strategic initiatives to improve its internal management, but their successful implementation has been hindered by limited leadership commitment and by a failure to address factors contributing to trust concerns. In 2013 and 2014, BSEE leadership began initiatives—development of an enterprise risk management (ERM) framework and of performance measures, respectively—to improve its decision-making capabilities but has not fully implemented them. By not fully implementing internal management initiatives, BSEE management demonstrates limited leadership commitment. In 2016, BSEE conducted initiatives—an employee engagement effort and an assessment of its Integrity and Professional Responsibility Advisor—to enhance communication and transparency, but these do not address key factors that contribute to long-standing trust concerns within the bureau. BSEE leadership began initiatives to improve bureau internal management capabilities but has not fully implemented them. In 2013, BSEE began an initiative to develop an ERM framework but has not fully implemented it as a management tool. In 2014, BSEE began an initiative to develop performance measures for its programs but has not implemented any measures. BSEE has made some progress over the past 3 years in implementing an ERM framework but has not completed the actions necessary to fully implement it. In 2013, BSEE began an initiative to develop and implement an ERM framework to provide enduring management of internal and external risks that threaten achievement of BSEE's mission. The Office of Management and Budget defines ERM as an effective agency-wide approach to addressing the full spectrum of the organization's risks by understanding the combined impacts of risks as an interrelated portfolio, rather than addressing risks only within silos (i.e., viewing problems in isolation). BSEE's Fiscal Year 2016-2019 Strategic Plan identifies the integration of enterprise risk management into bureau-wide decision making as a key initiative to meet BSEE's strategic goal to enhance decision making through the collection, management, and analysis of high-quality information. In conjunction with a contracted ERM support consultant, BSEE developed an iterative ERM cycle that includes six steps: (1) establish an ERM program, (2) identify individual risks and group them into strategic risks, (3) prioritize risks, (4) develop risk treatments, (5) implement selected risk treatments, and (6) monitor performance. BSEE completed the first three of these six steps in its iterative ERM cycle. BSEE officials told us that they had taken actions on the other three steps. Specifically:

1. Establish an ERM program: BSEE established an ERM charter in 2014 and drafted an ERM Handbook and Bureau Manual Chapter to guide ERM activities in April 2016 but has not finalized or distributed them throughout the bureau.

2. Identify individual risks and group them into strategic risks: In 2014, BSEE identified 12 strategic risks that cover the lifecycle of BSEE operations.

3. Prioritize risks: In 2014, the bureau prioritized its strategic risks, according to BSEE ERM planning documentation.
BSEE assessed each strategic risk by evaluating the potential severity and likelihood of a failure event occurring and ranked the risks based on the results.

4. Develop risk treatments: BSEE planned to verify the prioritization of its top several strategic risk treatments by July 2016 but did not do so. BSEE officials told us that the bureau halted ERM implementation while it acquired automated ERM software. However, in November 2016, BSEE determined that it would reinitiate ERM implementation simultaneously with the implementation of the software. BSEE now plans to complete evaluation of risk treatments in March 2017.

5. Implement selected risk treatments: BSEE planned to finalize a plan for its prioritized risk treatments by August 2016 but did not do so because of the aforementioned temporary halt to ERM implementation. BSEE officials told us that the bureau has implemented some risk treatments. BSEE now plans to finalize its risk treatment plan in March 2017.

6. Monitor performance: BSEE plans to begin monitoring the performance of its risk treatments following their implementation. BSEE intended to promulgate a monitoring plan by October 2016 but did not do so because of the aforementioned temporary halt to ERM implementation. BSEE now plans to complete its monitoring plan in March 2017.

As part of its ERM initiative, BSEE is assessing the risks posed by its relationships with other agencies. BSEE's Fiscal Year 2016-2019 Strategic Plan identified reviewing the efficacy and implementation of current interagency relationships as a key initiative to support its strategic goal of maintaining productive relationships with external entities. In 2015, BSEE's ERM support consultant assessed existing interagency relationships by prioritizing the 35 known memorandums of agreement, understanding, and collaboration based on the risk exposure they pose to the bureau. Of these 35 interagency memorandums, BSEE's consultant determined that 11 created a significant or moderate increase in risk exposure to the bureau. For example, the consultant determined that a memorandum of understanding with BOEM to carry out assigned responsibilities under the agreement between the United States and Mexico concerning transboundary hydrocarbon reservoirs in the Gulf of Mexico created the greatest risk exposure to BSEE. BSEE has developed a plan to update one of these agreements but has not developed any specific plans to complete revisions for the other 10. In 2016, BSEE began developing a systematic process for lifecycle management of interagency agreements to improve the bureau's awareness of existing agreements and their implementation status, among other things. For example, BSEE has developed four criteria for prioritizing interagency agreements in need of update. BSEE also identified additional interagency agreements not identified by the bureau's ERM consultant. BSEE planned to assess and prioritize the risks posed by these newly discovered agreements by October 2016, but the bureau now plans to do so in March 2017. BSEE also plans to implement, in June 2017, a bureau manual chapter and handbook that outline a lifecycle interagency agreement management process. Since 2012, BSEE has highlighted the need to develop and implement performance measures to inform management decision making.
Specifically, BSEE's October 2012 Strategic Plan - Fiscal Years 2012-2015 stated that the bureau must develop performance measures to assess the results of its programmatic efforts as well as its ability to reduce the risks of environmental damage and accidents. Additionally, the October 2013 Director's Intent message—which outlined the BSEE Director's multi-year priorities—reaffirmed this need, stating that BSEE must measure its performance to make informed management decisions and that to do so it must set key performance targets and measures, consistent with its strategic plan, and use them to guide its actions and decisions. BSEE's initiative to develop performance measures has comprised three sequential efforts, none of which has resulted in the implementation of performance measures. In July 2014, the bureau initiated the first of three formal efforts to develop performance measures. Specifically, BSEE contracted with a consultant to reassess its existing performance management system and update it as needed to ensure managers can make informed data-driven decisions. However, BSEE terminated the contract in January 2015 because, according to BSEE officials, leadership determined that the bureau needed to complete its ongoing internal organizational restructuring prior to developing programmatic performance measures. In December 2015, BSEE began its second effort, using the same consultant under a separate contract to develop performance measures for the national programs it established during its organizational restructuring—investigations, environmental compliance, and enforcement—as well as its Integrity and Professional Responsibility Advisor (IPRA). Specifically, the contract stipulated that the consultant analyze program objectives and components, develop potential performance measures, identify data sources and data collection requirements, and coordinate with BSEE officials to establish objectives for each measure. In March 2016, the consultant delivered a report to BSEE that identified 12 performance measures—5 for investigations, 3 for environmental compliance, 2 for enforcement, and 2 for the IPRA. However, BSEE headquarters officials told us that they are not implementing the measures and plans developed by the consultant due to a variety of factors, such as data availability limitations. For example, one proposed measure included a methodology to assess the effectiveness of issuing civil penalties to operators for safety or environmental infractions as a deterrent to committing future infractions. However, BSEE headquarters officials stated that the bureau does not issue enough civil penalties to conduct such an assessment—that is, the universe of available data to assess is too small. BSEE headquarters officials told us that the bureau did not implement the consultant-developed measures, but rather that those measures are informing BSEE's third effort to develop performance measures. In 2016, BSEE initiated a third effort to develop performance measures by providing a framework for considering performance management. Specifically, in January 2016—simultaneously with the aforementioned consultant's performance measure development effort—BSEE finalized a fiscal year 2016 work plan for the implementation of a revised performance management framework to include the identification of performance measures to help leadership gauge progress against the bureau's strategic plan.
BSEE headquarters officials told us that this initiative, which is being conducted internally by BSEE personnel, represents the beginning of a multi-year effort to implement a performance management system. BSEE initially planned to finalize its internally developed list of performance measures in February 2016, but did not meet this deadline. Additionally, BSEE headquarters officials told us that in June 2016, the bureau narrowed the scope of the initiative from a comprehensive set of performance measures to no more than three performance measures per program. These officials explained that this was a more feasible scope given the difficulties in obtaining management commitment as well as the technical complexity of the initiative. As of August 2016, BSEE had developed 17 draft performance measures, but bureau leadership has repeatedly missed deadlines to review them. BSEE headquarters officials told us that, subsequent to leadership approval, the bureau plans to pilot these measures and develop others in upcoming years. In December 2016, BSEE completed a fiscal year 2016 Baseline Performance Measure Report that discusses these 17 measures and the bureau's plans for future iterations of their development. We have previously reported on BSEE's struggles to effectively implement internal management initiatives. Specifically, in February 2016, we found that since its inception in 2011, BSEE had made limited progress in enhancing the bureau's investigative, environmental compliance, and enforcement capabilities. More than 2 years into its restructuring effort—and more than 5 years after the Deepwater Horizon incident—the bureau had not completed the underlying policies and procedures to facilitate the implementation of its national programs for these three capabilities. Moreover, we found that BSEE continues to face deficiencies in each of these capabilities that undermine its ability to effectively oversee offshore oil and gas development. As a result, among other things, we recommended that Interior direct BSEE to complete the policies and procedures for these three capabilities. Interior agreed that additional reforms—such as documented policies and procedures—are needed to address offshore oil and gas oversight deficiencies, but Interior neither agreed nor disagreed with our recommendation. Likewise, with regard to its ongoing strategic initiatives, more than 3 years have passed since BSEE initiated the development of its ERM framework, and more than 2 years have passed since BSEE prioritized the strategic risks it faces. However, BSEE has yet to develop, implement, and monitor risk treatments for even the highest-priority risks. Moreover, more than 4 years have passed since BSEE identified the development and implementation of performance measures as an organizational need. In that time, BSEE initiated several efforts to develop and implement such measures, and although BSEE has developed measures, it has yet to fully implement any. In our 2013 High-Risk update, because progress had been made in one of the three segments we identified in Interior's Management of Federal Oil and Gas Resources on our 2011 High-Risk List—reorganization of its oversight of offshore oil and gas activities—we narrowed the scope of the high-risk area to focus on the remaining two segments (revenue collection and human capital).
One of our five criteria for assessing whether an area can be removed from our high-risk list is leadership commitment—that is, demonstrated strong commitment and top leadership support. An example of leadership commitment is continuing oversight and accountability. In our 2015 High-Risk update, we determined that Interior had met our criterion for leadership commitment because Interior had implemented a number of strategies and corrective measures to help ensure the department collects its share of revenue from oil and gas produced on federal lands and waters and was developing a comprehensive approach to address its ongoing human capital challenges. However, BSEE leadership has not demonstrated continuing oversight and accountability for implementing internal management initiatives, as evidenced by its limited progress implementing key strategic initiatives as well as its inability to address long-standing oversight deficiencies. BSEE leadership has consistently signaled that it prioritizes internal management initiatives by citing their importance in strategic plans and budget justifications. For example, BSEE's fiscal year 2017 budget justification states that three key initiatives will inform the implementation of the bureau's Fiscal Year 2016-2019 Strategic Plan: (1) refinement of a comprehensive set of output- and outcome-based performance measures; (2) implementation of an ERM framework to facilitate information sharing and identify the risk relationships among and within programs; and (3) implementation of its national program manager model to ensure consistency across regions. According to the budget justification, these initiatives support both effective decision making and assessment of BSEE's progress in meeting its priorities. However, BSEE leadership has not fully implemented actions to demonstrate the commitment necessary to enable the successful implementation of such initiatives. Without higher-level oversight within Interior addressing leadership commitment deficiencies within BSEE—including the timely implementation of internal management initiatives and ongoing strategic initiatives (e.g., the ERM and performance measure initiatives)—the bureau is unlikely to succeed in implementing these initiatives in a timely manner. In 2016, BSEE conducted two initiatives—an employee engagement effort and an assessment of its IPRA—to enhance communication and transparency, but these initiatives have not achieved results or addressed factors that contribute to trust concerns within the bureau. BSEE's Fiscal Year 2016-2019 Strategic Plan discusses improving employee engagement—generally defined as the sense of purpose and commitment employees feel toward their employer and its mission—to foster a culture of collaboration within BSEE by, among other things, enhancing trust and implementing an internal communications approach that encourages dialogue and sets expectations for sharing accurate and timely information. A 2015 bureau strategic planning summary document stated that there is a lack of trust and respect between and among headquarters, the regions, and the districts. Additionally, a 2013 BSEE internal evaluation found that some outside the bureau commented that BSEE does not appear to trust its own personnel.
We have previously found that communication from management—as reflected by employee responses in the Federal Employee Viewpoint Survey—is one of the six strongest drivers of employee engagement. BSEE Federal Employee Viewpoint Survey data for 2013, 2014, and 2015 indicate that approximately one-third of BSEE respondents were not satisfied with information received from management regarding organizational activities (32.9, 31.1, and 32.9 percent, respectively). At the same time, fewer than half were satisfied with such information (41.7, 46.1, and 43.7 percent, respectively). According to some BSEE officials from across the bureau, the needs to improve trust and communication are interconnected. Some senior BSEE officials throughout the organization told us that poor communication from headquarters has exacerbated trust issues between headquarters and the regions (including districts) that have existed since the 2010 Deepwater Horizon incident. As previously discussed, BSEE leadership's safety and environmental stewardship initiatives have had limited success, largely due to poor communication and coordination between headquarters and the regions. BSEE officials from across the bureau told us that the poor communication between headquarters and the regions led to a deficit of trust vertically throughout the bureau. They also told us that because BSEE headquarters was newly established as part of the reorganization of MMS in 2010 following the Deepwater Horizon incident, there were not many existing relationships between headquarters and regional personnel. BSEE regional officials told us of specific instances in which BSEE headquarters did not communicate certain information to the regions, which has exacerbated the existing trust concerns, including the following examples:

BSEE leadership reorganized its Pacific region with a structure that does not align directly with the bureau's national program manager model and did not communicate the reasons why. One of the guiding principles of BSEE's organizational restructuring was consistency, but limited communication regarding BSEE's reorganization of its regions led some to believe that BSEE headquarters was not abiding by this principle. According to senior BSEE officials, the bureau restructured the Pacific Region—which includes 42 permanent full-time equivalent positions—due to management problems with some personnel. They told us that to maintain an appearance of impartiality during the reorganization of the Pacific Region, BSEE contracted with a consultant to recommend a new organizational structure. In turn, the consultant recommended changes to address a lack of leadership and ineffective communication in the region, which BSEE officials told us influenced the new regional structure. However, this new structure does not include offices that correspond to the new national programs established during BSEE's organizational restructuring—investigations, environmental compliance, and enforcement. BSEE leadership officials told us that the relatively small size of the Pacific Region necessitated a unique structure. Conversely, the Gulf of Mexico region—an organization more than 10 times as large, with 454 full-time equivalent positions—was restructured internally by BSEE personnel without relying on a consultant.
Additionally, the Gulf of Mexico Region's revised organizational structure aligns with the national program manager model implemented at BSEE headquarters (i.e., it has offices dedicated to the new national programs—investigations, environmental compliance, and enforcement). Some BSEE officials told us that they were unaware of leadership's rationale for the differences in office structures because it was not communicated across the bureau. In turn, this lack of communication from headquarters led to confusion because regional personnel viewed it as inconsistent with the Director's Intent for the restructuring, which contributed to trust concerns.

BSEE headquarters did not notify the Gulf of Mexico Region when it advertised for two field-based positions located in the region to manage its SEMS program. According to BSEE regional officials, these positions would replicate functions that already existed in the Gulf of Mexico Region's Office of Safety Management. Further, the reporting chain of these positions did not align with other actions taken during organizational restructuring, which emphasized consistency across the bureau. Specifically, these field-based positions would report to headquarters rather than regional leadership even though the Gulf of Mexico Region recently had undergone a restructuring to ensure that regional program offices report to regional leadership rather than headquarters. As a result, BSEE regional officials told us that headquarters' actions to create new positions that would affect the region without notifying it contributed to the trust concerns of regional personnel.

BSEE headquarters did not disseminate the final 2016 Environmental Stewardship Collaboration Core Group report to all group members, including representatives from the Office of Environmental Compliance, which is BSEE's primary organization for conducting environmentally focused oversight. As a result, BSEE operational personnel who could potentially benefit from the results of the working group were not advised of its final findings.

In February 2016, BSEE announced an initiative to assess internal communications and develop an employee engagement strategy. The data collection plan for this employee engagement initiative focused on conducting outreach across the bureau to identify the means by which BSEE personnel prefer to receive information—for example, town hall meetings, BSEE's website, or e-mail. BSEE conducted this outreach but as of November 2016 had not developed an employee engagement strategy—although its original target completion date was April 2016—and it is unclear when it will do so. In September 2016, BSEE decided to conduct a second round of outreach across the organization by spring 2017 to review feedback from the initial outreach, discuss next steps, and provide guidance on existing communications resources. Additionally, based on its initial outreach efforts, BSEE identified numerous interim projects to undertake while it develops its employee engagement strategy: redesigning the bureau's intranet website, updating its online employee directory, briefing employees on employee engagement project findings, training on BSEE's e-mail system, building staff interaction, and streamlining its staff onboarding process. However, BSEE headquarters officials told us that the bureau has not identified a plan with time frames for completion of these efforts.
BSEE employee engagement initiative documentation identifies the need to enhance communication vertically and horizontally across the bureau, but it is unclear whether the employee engagement initiative will address the lack of quality information that BSEE officials told us undermines trust across the organization, or whether it will set expectations for sharing accurate and timely information as called for by BSEE's Fiscal Year 2016-2019 Strategic Plan. Under Standards for Internal Control in the Federal Government, management should internally communicate the necessary quality information to achieve the entity's objectives. For example, management communicates such quality information down, across, up, and around reporting lines to all levels of the entity. However, it is unclear whether BSEE's employee engagement initiative will do so because the scope of the effort has focused on means of communication rather than quality of information. Without expanding the scope of its employee engagement initiative to incorporate the need to communicate quality information throughout the bureau, BSEE might not address the lack of quality information being communicated throughout the bureau that is exacerbating trust concerns. The bureau's IPRA is responsible for promptly and credibly responding to allegations or evidence of misconduct and unethical behavior by BSEE employees and coordinating its activities with other entities, such as the IG. Senior BSEE officials from across the bureau stated that the IPRA function is critical to bolstering trust within the bureau because personnel need to have a functioning mechanism to which they can report potential misconduct by other employees. However, some BSEE officials from across the bureau expressed concern regarding the IPRA's process for adjudicating allegations of misconduct. To increase transparency and consistency in how IPRA cases are handled following the completion of an investigation report, BSEE conducted a pilot initiative in 2016 to assess the types of allegations of misconduct being reported to the IPRA as well as the frequency with which the IPRA referred such allegations to other entities. In August 2016, BSEE determined that the majority of incoming allegations are being directed to the appropriate office for action. However, BSEE's pilot initiative did not address unclear and conflicting guidance that could undermine organizational trust in how the IPRA addresses allegations of misconduct. Specifically, the Interior Department Manual states that IPRA responsibilities include working with the IG on internal matters the IPRA investigates, pursuing certain administrative investigations with the IG's consent and knowledge, and advising the IG of the status and results of IPRA investigations, as requested. Additionally, IPRA guidance stipulates that once an allegation is received, the IPRA Board—composed of the IPRA, the head of Human Resources, and the Deputy Director—will assess whether the allegation should be referred to the IG or other appropriate entity, investigated by the IPRA, or closed for no further action. Further, the IPRA told us that the IG has first right of refusal to investigate all allegations of misconduct within the bureau. However, the Interior Department Manual and IPRA guidance do not specify criteria for the severity thresholds for allegations that are to be referred to the IG. As a result, the boundaries of IPRA responsibility are unclear.
Additionally, BSEE's pilot initiative did not address IPRA guidance that conflicts with the reporting chain established by the Interior Department Manual and BSEE's organization chart. Specifically, the Interior Department Manual and BSEE's organization chart indicate that the IPRA reports to the BSEE Director. However, IPRA guidance also states that, for cases that are not accepted by the IG, an IPRA Board composed of the IPRA, the head of Human Resources, and the Deputy Director will assess whether the allegation should be referred, investigated by the IPRA, or closed for no further action. BSEE officials told us that, in practice, the IPRA makes determinations as stipulated by the IPRA guidance. In turn, this reporting structure—in which the IPRA Board determines how to proceed without consultation with the Director—does not align with the Interior Department Manual and BSEE organization chart. Some BSEE regional officials told us that the uncertainty of how the IPRA reports allegations to the IG, as well as its reporting structure, led them to question the independence of IPRA activities, and they expressed concern that the IPRA could be used to retaliate against employees, which has undermined organizational trust in its activities. Under the federal standards of internal control, management should design control activities to achieve objectives and respond to risks. For example, agencies are to clearly document internal controls, and the documentation may appear in management directives, administrative policies, or operating manuals. While BSEE has documented its policies, they are not clear, because (1) neither the IPRA guidance nor the Interior Department Manual specifies criteria for the severity thresholds for allegations that are to be referred to the IG and (2) the IPRA guidance does not align with the Interior Department Manual and BSEE organization chart concerning the IPRA reporting chain. Moreover, BSEE's IPRA pilot initiative did not address the unclear and conflicting guidance regarding the IPRA's referral criteria and reporting chain. Without assessing and amending its IPRA guidance to clarify (1) the severity threshold criteria for referring allegations and (2) the IPRA reporting chain, BSEE risks further eroding organizational trust in the IPRA to carry out its mission to promptly and credibly respond to allegations or evidence of misconduct by BSEE employees. Since 2012, BSEE has begun several key strategic initiatives to improve its safety and environmental oversight. However, the bureau has made limited progress in implementing them. For example, BSEE's Environmental Stewardship Initiative encompassed two simultaneous efforts to reduce environmental risks related to U.S. offshore oil and gas operations, but the efforts were partially overlapping, suffered from ineffective coordination and communication, and produced few results. Without establishing a mechanism for BSEE management to obtain and incorporate input from bureau personnel and any external parties, such as Argonne, that can affect the bureau's ability to achieve its objectives, BSEE's risk-based inspection program is likely to experience continued delays and implementation problems. Likewise, since 2013 BSEE has begun several strategic initiatives to improve its internal management but has made limited progress in implementing them.
Without a higher-level organization within Interior addressing leadership commitment deficiencies within BSEE—including the timely implementation of internal management initiatives and ongoing strategic initiatives (e.g., the ERM and performance measure initiatives)—the bureau is unlikely to succeed in implementing these initiatives in a timely manner. Additionally, BSEE documentation identifies the need to enhance communication vertically and horizontally across the bureau, but it is unclear whether the bureau's employee engagement initiative will address the lack of quality information that BSEE officials told us undermines trust across the organization or set expectations for sharing accurate and timely information as called for by BSEE's Fiscal Year 2016-2019 Strategic Plan. Without expanding the scope of its employee engagement strategy to incorporate the need to communicate quality information throughout the bureau, BSEE's employee engagement initiative might not address the lack of quality information being communicated throughout the bureau that is exacerbating trust concerns. Further, BSEE's IPRA pilot initiative did not address unclear and conflicting guidance that could undermine organizational trust in how the IPRA addresses allegations of misconduct. Without assessing and amending IPRA guidance to clarify (1) severity threshold criteria for referring allegations of misconduct to the IG and (2) its reporting chain, BSEE risks further eroding organizational trust in the IPRA to carry out its mission to promptly and credibly respond to allegations or evidence of misconduct by BSEE employees. In this report, we are making four recommendations. We recommend that the Secretary of the Interior direct the Assistant Secretary for Land and Minerals Management, who oversees BSEE, to take the following two actions:

Establish a mechanism for BSEE management to obtain and incorporate input from bureau personnel and any external parties, such as Argonne, that can affect the bureau's ability to achieve its objectives.

Address leadership commitment deficiencies within BSEE, including by implementing internal management initiatives and ongoing strategic initiatives (e.g., ERM and performance measure initiatives) in a timely manner.

We also recommend that the Secretary of the Interior direct the BSEE Director to take the following two actions:

To address trust concerns that exist between headquarters and the field, BSEE should expand the scope of its employee engagement strategy to incorporate the need to communicate quality information throughout the bureau.

To increase organizational trust in IPRA activities, BSEE should assess and amend IPRA guidance to clarify (1) severity threshold criteria for referring allegations of misconduct to the IG and (2) its reporting chain.

We provided a draft of this report to the Department of the Interior for review and comment. In its written comments, reproduced in appendix I, Interior neither agreed nor disagreed with our four recommendations. Interior stated that the recommendations reflect ongoing BSEE commitments and that BSEE and Interior agree with the concepts laid out in the first three recommendations. For the fourth recommendation, Interior stated that BSEE will examine the current guidance for the Integrity and Professional Responsibility Advisor.
However, Interior also stated that the draft report neither fully describes the progress made within BSEE nor fully represents the current status of the programs, initiatives, and activities highlighted therein. Interior requested that we consider information that it stated provides status updates and corrections, while also laying out in more detail BSEE's continuing commitments in these areas. Interior also enclosed additional documentation. We reviewed the additional information and documentation that Interior provided and found no evidence that would support revising any of our findings. Accordingly, we disagree with Interior’s characterization of the progress that BSEE has made and believe that actions to implement our recommendations are necessary. Specifically:

Regarding our recommendation that Interior develop a mechanism for BSEE management to obtain and incorporate input from bureau personnel and any external parties that can affect the bureau’s ability to achieve its objectives, Interior’s comments do not discuss any specific actions taken or underway to do so. Additionally, in its comments, Interior stated that regional personnel, such as regional managers and district managers, were involved throughout the development of the risk model and the pilot testing. To support this statement, Interior provided documentation of electronic communications from BSEE headquarters to senior regional leadership informing them of certain aspects of the program, as well as meeting documents showing that certain regional officials attended meetings regarding program development. However, rather than demonstrating that regional managers were involved in developing the model and methodology, this documentation shows that regional officials raised concerns about the model and methodology and that headquarters officials said they would not make any changes in response. Specifically, Interior provided e-mails indicating that headquarters informed regional officials of the development of the model through the Strategic Plan Implementation Team in late 2012. However, e-mails from early 2015 indicate that regional officials were not involved in the development of the risk model or the risk-based inspection program methodology during the more than 2 intervening years, because they had to request information from headquarters about the underlying basis of the model and the methodology on which they were being asked to comment. After regional officials reviewed the methodology and the model, they e-mailed headquarters to raise concerns with the model. Headquarters officials replied that they had validated the model and that changing its parameters would decrease its effectiveness. Therefore, the e-mails that Interior provided support what regional personnel told us—that their input was not incorporated into the model and methodology prior to the first pilot. We found that regional personnel became more involved in the risk-based inspection initiative after the first pilot exposed deficiencies in headquarters’ approach. In its comments, Interior disagreed with our assessment of pilot test deficiencies and stated that BSEE expected to encounter issues while pilot testing. However, we believe that some of those issues might have been averted had BSEE included the input of regional officials earlier in the process.
As described throughout the report across multiple initiatives, we have concerns about the fundamental working relationship between the regions and headquarters, concerns that are substantiated by the e-mails that Interior provided in response to our draft report. We therefore continue to believe that Interior should develop a mechanism for BSEE management to obtain and incorporate input from bureau personnel and any external parties that can affect the bureau’s ability to achieve its objectives in the next risk-based inspection pilot test, which is to be conducted by a joint headquarters-regional team in March 2017. However, as we discussed in the report, the first pilot had deficiencies in identifying high-risk offshore facilities, so the extent to which BSEE will be able to apply lessons learned is uncertain.

Regarding our recommendation that Interior address leadership commitment deficiencies within BSEE, including by implementing internal management initiatives (e.g., the ERM and performance measure initiatives) in a timely manner, Interior’s comments do not discuss any specific actions taken to meet the intent of our recommendation. Interior stated that BSEE's implementation of ERM is on target and that BSEE has, among other activities, established an ERM framework, completed its risk register, fully developed a maturity model, aligned enterprise and strategic risks with its strategic plan, and linked program risks with appropriate strategic risk categories. Interior also stated that BSEE is on schedule to complete its first full ERM cycle in March 2017. Additionally, Interior stated that ERM is a relatively new program directed by a fall 2016 Office of Management and Budget circular regarding ERM. However, while Circular No. A-123 was revised in July 2016 with new ERM implementation requirements effective for fiscal year 2017, Interior’s statement is misleading because BSEE’s ERM efforts have been ongoing since 2013. Additionally, Interior provided clarifications but did not dispute our findings on its efforts to develop performance measures. Specifically, Interior stated that the November 2016 completion of a fiscal year 2016 Baseline Performance Measure Report represented the first step in implementing BSEE’s performance measure program and that the bureau anticipates having an initial performance dashboard in fiscal year 2018. By treating November 2016 as the first step toward a performance measure program, Interior appears to disregard BSEE’s efforts over the prior 4 years. If BSEE succeeds in fiscal year 2018, this will be the culmination of 6 years of attempting to develop performance measures to inform management decision making. Therefore, we continue to believe that Interior should address leadership commitment deficiencies within BSEE, including by implementing internal management initiatives, such as ERM and performance measures, in a timely manner rather than revising initiative start dates.

Regarding our recommendation that BSEE expand the scope of its employee engagement strategy to incorporate the need to communicate quality information throughout the bureau, Interior stated that BSEE is committed to enhancing communication and collaboration among its personnel and agrees with the importance of strengthening communication between headquarters and the regions. Interior asserted that, since receiving our draft report, BSEE has completed an assessment and analysis of employee feedback and developed an engagement plan.
However, Interior did not provide documentary evidence of this plan or what it entails. Moreover, in our report we identified a long history of poor communication between headquarters and regional officials, leading to a widespread lack of trust across the bureau. Without evidence of BSEE’s activities—and in light of the bureau’s documented struggles to effectively implement organizational change—we cannot confirm that any action has been taken, and we continue to believe that BSEE should expand the scope of its employee engagement strategy to incorporate the need to communicate quality information throughout the bureau.

Regarding our recommendation that BSEE assess and amend IPRA guidance to clarify (1) severity threshold criteria for referring allegations of misconduct to the IG and (2) its reporting chain, Interior stated that the creation of the IPRA directly impacts trust concerns within BSEE and that the bureau will examine current guidance for the IPRA. However, Interior stated that, contrary to our draft report, the Interior Department Manual already includes severity threshold criteria for referring allegations of misconduct to the IG. We believe that the language in the Interior Department Manual, which states that “serious allegations” and “serious complaints” should be referred to the IG, does not provide the specificity needed to adequately define the boundaries of IPRA responsibility. Additionally, Interior stated that the IPRA reports to the BSEE Director, consistent with the reporting chain established in the bureau’s organizational chart and the Interior Department Manual. However, the BSEE Director told us that, in practice, the IPRA often reports to the BSEE Deputy Director rather than the Director. Moreover, our work found that the decision-making process of the IPRA Board—whereby the Board determines how to proceed with an allegation without consulting the Director—does not align with the IPRA’s prescribed reporting chain. As a result, we continue to believe that BSEE should assess and amend IPRA guidance to clarify (1) severity threshold criteria for referring allegations of misconduct to the IG and (2) its reporting chain.

Interior’s comments on our draft report underscore our concerns regarding deficiencies in BSEE leadership commitment and support the decision to incorporate the restructuring of offshore oil and gas oversight into our High-Risk List in February 2017. Specifically:

In its comments, Interior highlighted BSEE’s decision to contract with NAPA to evaluate the bureau—at a cost of approximately $450,000—as an example of the bureau’s commitment to maturing the organization. However, the evaluation’s timing, scope, and methodology cause us to question its value. Specifically, BSEE issued the contract 5 months after we began our review, and its scope—which includes identifying BSEE strategic and organizational initiatives and assessing their progress—mirrors our work already underway. Further, the contract stipulates that all work “shall be developed in a collaborative manner with BSEE leadership, but with a focus on document review rather than in-depth interviews with bureau personnel.” This calls into question the independence of the NAPA evaluation. Additionally, when we met with the NAPA team, they indicated that their instructions were to focus on BSEE headquarters and not conduct outreach to the bureau’s operational components in the regions.
In our experience working with BSEE, we have found extensive outreach to the field to be essential to understanding the operations of the bureau. As a cumulative result of these factors, it is uncertain whether BSEE’s decision to engage in this evaluation will produce the organizational improvement that Interior advertises.

Interior’s written comments also contain factual errors that are contradicted by the evidence we collected in our work, further heightening our concerns regarding BSEE leadership’s commitment to taking the steps needed to improve the bureau. For example, Interior stated that environmental risk was not a consideration in the Core Group objectives or final report. However, the Core Group’s final report states that the purpose of the report was “to assist in determining current and emerging environmental risks and whether BSEE has the best mitigation strategies in place.” Likewise, Interior stated that BSEE’s Environmental Stewardship Collaboration Core Group and the Argonne Environmental Risk Assessment were not simultaneous efforts. On the contrary, according to its final report, the Core Group convened from February 2016 to May 2016, while the Statement of Work for the Argonne environmental risk assessment contract was issued in December 2015 and Argonne delivered its final report in July 2016; that is, the two efforts overlapped.

Interior also provided technical comments that we incorporated into the report, as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

In addition to the individual named above, Christine Kehr, Assistant Director; Richard Burkard; Cindy Gilbert; Alison O’Neill; Matthew D. Tabbert; Barbara Timmerman; Kiki Theodoropoulos; and Daniel R. Will made significant contributions to this report.
On April 20, 2010, the Deepwater Horizon drilling rig exploded in the Gulf of Mexico. The incident raised questions about Interior's oversight of offshore oil and gas activities. In response, in May 2010, Interior reorganized its offshore oil and gas management activities, and in October 2011, it created BSEE to, among other things, develop regulations, conduct inspections, and take enforcement actions. In February 2011, GAO added the management of federal oil and gas resources to its High-Risk List. In December 2015, BSEE issued a strategic plan outlining initiatives to improve offshore safety and environmental oversight as well as its internal management.

This report examines what efforts BSEE leadership has made in implementing key strategic initiatives to improve its (1) offshore safety and environmental oversight and (2) internal management. GAO reviewed laws, regulations, policies, and other documents related to the development of BSEE's strategic initiatives. GAO also interviewed BSEE officials.

The Department of the Interior's (Interior) Bureau of Safety and Environmental Enforcement (BSEE) leadership has started several key strategic initiatives to improve its offshore safety and environmental oversight, but its limited efforts to obtain and incorporate input from within the bureau have hindered its progress. For example, to supplement its mandatory annual regulatory compliance inspections, in 2012, BSEE leadership began developing a risk-based inspection initiative to identify high-risk production facilities and assess their safety systems and management controls. During pilot testing in 2016, several deficiencies—including concerns about the usefulness of its facility risk-assessment model and unclear inspection protocols—caused BSEE to halt the pilot. According to bureau officials, during the development of the initiative, BSEE headquarters did not effectively obtain and incorporate input from regional personnel with long-standing experience in previous risk-based inspection efforts, who could have identified deficiencies earlier in the process. GAO previously found that a key practice when implementing large-scale management initiatives is involving employees to obtain their ideas and incorporating their feedback into new policies and procedures. Instead, BSEE leadership appears to have excluded the input of regional personnel by, for example, not incorporating input beyond the risk-assessment tool when selecting the first pilot facility, even though the bureau's inspection planning methodology prescribed doing so. This undercut the pilot effort, raising questions about whether the bureau's leadership has the commitment necessary to successfully implement its risk-based program. Without higher-level leadership within Interior establishing a mechanism for BSEE to obtain and incorporate input from personnel within the bureau, BSEE's risk-based inspection initiative could face continued delays.

Similarly, since 2013, BSEE leadership has started several key strategic initiatives to improve its internal management, but none have been successfully implemented, in part because of limited leadership commitment. For example, BSEE's leadership identified the importance of developing performance measures in its 2012-2015 strategic plan.
BSEE began the first of three attempts to develop performance measures in July 2014 by hiring a contractor to develop measures, but the bureau terminated this contract in January 2015 after determining that it needed to complete its internal reorganization before developing such measures. A second effort to develop performance measures, using the same contractor, started in December 2015 and yielded 12 performance measures in March 2016, but BSEE did not implement them, in part because data did not exist to use the measures. By the time BSEE received this contractor's report, it had already begun a third effort to develop performance measures internally; as of November 2016, the bureau had identified 17 draft performance measures, but BSEE leadership missed repeated deadlines to review them. BSEE officials told GAO that, after leadership approval, the bureau plans to pilot these measures and develop others. BSEE leadership has not demonstrated continuing oversight and accountability for implementing internal management initiatives, as evidenced by its limited progress implementing key strategic initiatives. Without higher-level oversight within Interior addressing leadership commitment deficiencies within BSEE, the bureau is unlikely to succeed in implementing internal management initiatives.

GAO is making four recommendations, including that higher-level leadership within Interior (1) establish a mechanism for BSEE management to obtain and incorporate input from bureau personnel that can affect the bureau's ability to achieve its objectives and (2) address leadership commitment deficiencies within BSEE, including by implementing internal management initiatives and ongoing strategic initiatives in a timely manner. Interior neither agreed nor disagreed with the recommendations.
Prior to entering GME training, students complete medical school under one of two broad educational philosophies—allopathic or osteopathic. Allopathic physicians represent the majority of physicians and hold a Doctor of Medicine degree (known as an M.D.). In contrast, osteopathic physicians hold a Doctor of Osteopathic Medicine degree (known as a D.O.). Osteopathic medicine is based on a philosophy that emphasizes a “whole-person approach,” and its physicians receive specific training in manipulating the musculoskeletal system.

Following medical school, GME provides the clinical training required for a physician to be eligible for licensure and board certification to practice medicine independently in the United States. Residents in GME programs train, usually in a hospital or clinic, under the direct or indirect supervision of physician faculty members. Historically, ACGME has accredited GME programs focused on allopathic training; these programs are available to U.S. or Canadian medical school graduates with an M.D., U.S. medical school graduates with a D.O., and international medical graduates with the equivalent of an M.D. degree from foreign medical schools. GME programs focused on osteopathic training are accredited by AOA and available only to U.S. medical school graduates with D.O. degrees. While ACGME and AOA programs are accredited separately, ACGME and AOA may also jointly accredit programs, and the organizations are in the process of establishing a single accreditation system. On June 30, 2020, AOA is to cease accreditation activities, all GME programs are to be accredited by ACGME, and physicians will be able to attend any accredited program regardless of their degree type.

Whether in ACGME or AOA programs, physicians may pursue GME training within a variety of specialties or subspecialties. Initially, physicians go through GME training for a specialty—such as internal medicine, family medicine, pediatrics, anesthesiology, radiology, or general surgery. Of the specialties, family medicine, internal medicine, and pediatrics are considered primary care specialties, as they provide comprehensive first-contact and continuing care for persons with a broad range of health concerns. Completion of training in one of these specialties qualifies a physician to seek initial board certification to practice medicine. Some residents, however, may choose to subspecialize and seek additional GME training. For example, a physician who completed an internal medicine GME program may decide to subspecialize in cardiology. (See fig. 1.) The percentage of residents who later subspecialize varies by specialty type. For example, according to a 2008 study, among the primary care specialties, 5 percent of family medicine residents end up subspecializing, compared with 55 and 39 percent of internal medicine and pediatric residents, respectively. Therefore, a resident training in a primary care specialty may not ultimately practice as a primary care physician.

Studies of the physician workforce and associated shortages vary in the assumptions they make about physician supply and demand, resulting in different conclusions about the nature and extent of shortages. However, there continue to be shortages of physicians, such as in primary care, and studies have found that rural areas may be more likely to experience shortages. For example, one study reported that while there are about 80 primary care physicians per 100,000 people in the United States, the average in rural areas is 68 per 100,000.
At the federal level, HRSA monitors the supply of and demand for health care professionals and disseminates data about workforce needs and priorities. With regard to primary care physician shortages, HRSA identified 1,401 geographic and 1,485 population HPSAs across the United States, as of January 2017. While these HPSAs were located in nearly all 50 states and the District of Columbia, the South had the most (42 percent), and the Northeast had the fewest (8 percent). According to HRSA, 8,583 more primary care physicians would be needed to remove these HPSA designations. In a November 2016 report, HRSA also projected that the need for more primary care physicians is likely to continue, with a shortage of up to 23,640 physicians by 2025 (while family medicine and internal medicine had projected shortages nationally, pediatrics did not). HRSA projected primary care physician shortages in all regions of the country, but noted that the South was expected to have the largest shortage and the Northeast the smallest. In addition to shortages of primary care physicians, HRSA has also recently projected shortages of other physicians, such as psychiatrists and those in certain surgical specialties and internal medicine subspecialties.

While increasing physician supply through more GME training opportunities is one way to prevent shortages, experts have also suggested other options. For example, federal payment incentives have been used to encourage physicians to practice in underserved areas, and changes to physician payment could also encourage physicians to select certain specialties. In addition, experts have suggested that access to care could be increased through the greater use of telehealth and other technology, as well as by changing the pattern of practice, such as by providing more team-based care or by eliminating care that is either unnecessary or of limited value. Particularly with regard to primary care, experts have also suggested that some health care needs may be filled through the greater use of other types of providers, such as nurse practitioners and physician assistants. (See app. I for a summary of selected federal efforts related to nurse practitioner and physician assistant training.)

The federal government invests significantly in GME training. The vast majority of federal funding is distributed by HHS through CMS’s Medicare program, although HHS also provides GME funding through other programs, such as CMS’s Medicaid program and grant programs administered by HRSA. Other sources of federal funding include GME training that occurs at VA and Department of Defense medical facilities. With regard to Medicare, CMS makes GME payments to different types of institutions, though hospitals paid through the inpatient prospective payment system account for most of these payments. Medicare pays for a hospital’s costs associated with GME training through two mechanisms—direct graduate medical education (DGME) and indirect medical education (IME) payments—both of which are formula-based payments set by statute. DGME payments cover the direct costs of GME training, such as residents’ salaries. IME payments reflect the higher patient care costs that hospitals with GME programs may incur because, for example, residents in training may order more diagnostic tests and procedures than experienced clinicians and take more time to interpret the results.
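Although the actual statutory formulas involve many adjustments (FTE weighting, rolling averages, and the caps discussed below), the basic structure of the two payment mechanisms can be sketched in a few lines. The following is an illustrative sketch only; the hospital figures are hypothetical, and the IME factor shown is the commonly cited form of the operating adjustment, not a complete implementation of the statute:

```python
# Simplified sketch of the structure of Medicare's two GME payment
# formulas. All hospital figures below are hypothetical; the actual
# statute adds FTE weighting, rolling averages, caps, and other rules.

def dgme_payment(per_resident_amount, resident_ftes, medicare_share):
    """DGME: a hospital-specific per-resident amount, multiplied by the
    resident FTE count and by Medicare's share of inpatient days."""
    return per_resident_amount * resident_ftes * medicare_share

def ime_adjustment(residents_per_bed, multiplier=1.35):
    """IME: the operating add-on is commonly described as
    c * ((1 + r)^0.405 - 1), where r is the resident-to-bed ratio."""
    return multiplier * ((1 + residents_per_bed) ** 0.405 - 1)

# Hypothetical teaching hospital: $100,000 per-resident amount,
# 50 resident FTEs, and Medicare patients at 40% of inpatient days.
print(f"DGME payment: ${dgme_payment(100_000, 50, 0.40):,.0f}")  # $2,000,000
print(f"IME add-on factor: {ime_adjustment(0.20):.3f}")          # ~0.103
```

The point of the sketch is the structure: DGME scales with a hospital's own historical per-resident cost and its Medicare patient share, while IME scales nonlinearly with how resident-intensive the hospital is.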
The Balanced Budget Act of 1997 capped the number of residents that hospitals may count for DGME and IME payment at the number of full-time equivalent (FTE) residents in place in 1996. If a hospital did not have GME training in 1996, its resident FTE caps are set 5 years after it begins training residents in its first new program, based in part on the highest number of resident FTEs who train during that fifth year. While hospitals are free to add residents beyond their caps, residents above the caps generally do not result in additional Medicare payments.

With regard to VA, the agency estimated that in fiscal year 2015, more than 44,000 residents rotated through a VA facility as part of their GME training. Nearly all of the GME programs utilizing VA as a training site are sponsored by an affiliated medical school or teaching hospital, rather than by VA. VA provides payments to its medical centers for GME training and, in turn, medical centers reimburse their academic affiliates for the time residents spend training there.

Over time, a number of federal efforts have been created that provide funding to increase GME training in rural areas and in primary care. For example, the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 included provisions that could expand GME training in rural areas by allowing certain qualified hospitals to increase their resident FTEs above the 1996 caps. PPACA included several provisions intended to increase primary care GME training, including two grant programs overseen by HRSA—the Teaching Health Center program and the Primary Care Residency Expansion program. In addition, PPACA required that Medicare-funded resident FTEs not being used by hospitals be redistributed to other hospitals meeting certain requirements. Similarly, the Veterans Access, Choice, and Accountability Act of 2014 directed VA to expand its GME training according to certain priorities, including primary care. Some of the efforts focused on expanding primary care also prioritized rural or underserved areas designated as HPSAs. (See table 1 for additional information on these federal rural and primary care efforts.)

From 2005 through 2015, 99 percent of residents in GME programs trained in urban areas, despite some growth in rural areas. Overall, during this period, the number of residents grew by 22 percent—from 104,330 to 127,578. While there was growth in rural areas, urban areas added a far greater number of residents—almost 23,000 (from 103,526 to 126,355), compared with 419 for rural areas (from 804 to 1,223). Residents in GME training were also concentrated in the Northeast, although their rate of growth was higher in other regions of the country. From 2005 through 2015, the largest percentage of residents (about 30 percent) trained in the Northeast, which also had the smallest percentage of the U.S. population (about 18 percent). In 2015, the Northeast had 69 residents per 100,000 people. In comparison, the South also had about 30 percent of residents in 2015, but because it was the region with the largest population (about 38 percent), it had only 31 residents per 100,000 people. While the Northeast maintained the highest concentration of residents from 2005 through 2015, the resident growth rate was higher in the other regions. Specifically, the number of residents increased by 15 percent in the Northeast, compared with 24 percent or more in each of the other regions.
However, resident growth in the South and West was somewhat tempered by the fact that these regions also experienced higher population growth over the period, resulting in smaller increases in the number of residents per 100,000 people compared with the Midwest and Northeast. (See fig. 2.)

Within these broad regions, residents in GME training were further concentrated in specific counties. Of the 3,143 counties in the United States, residents were located in 444 counties in 2015. However, about half of all residents were located in just 31 counties. (See fig. 3.) These counties accounted for 18 percent of the U.S. population and were located in each of the four regions. In the Midwest, for example, 65 percent of the residents in that region trained in 11 counties. From 2005 through 2015, the overall number of counties with residents increased by 15 percent (from 387 to 444). While residents were no longer in 18 counties, 75 new counties gained residents. Of these 75 new counties, most were located in the South (36) and the West (19).

GME training for some residents also occurred in HPSAs—areas of the country designated as having a shortage of primary care physicians. For example, in 2015, 33 percent (17,360) of all primary care residents trained in a HPSA. The South had the highest percentage of its residents in HPSAs (39 percent of 15,385 residents), and the Midwest and West had the lowest (28 percent of 12,738 residents for the Midwest and 28 percent of 8,348 residents for the West). In addition, 89 percent of the residents in a HPSA trained in a population HPSA—an area designated as having a shortage for a specific population—compared with 11 percent who trained in a HPSA with a shortage in the entire geographic area. Though many primary care residents trained in a HPSA, these residents were located in less than 8 percent of the 2,685 HPSAs nationwide.

The trends presented here generally represent the location of primary GME training sites and may not fully account for the clinical experiences residents receive during their training. According to ACGME officials, GME training must be based in areas that can support requirements for accreditation, including adequate patient volume and teaching quality. As a result, it tends to be located primarily in certain areas, such as urban areas. However, officials said that residents training in urban areas may rotate to participating sites in rural areas or treat patients from surrounding rural areas at urban training sites. We were not able to capture such experiences, in part because neither ACGME nor AOA collects data on the extent to which residents train at participating sites.

From 2005 through 2015, most residents in GME programs were training in one of the specialties that all residents are required to complete before choosing whether to practice or subspecialize. For example, in 2015, over 80 percent of all residents were training in a specialty, with the remaining residents training in a subspecialty. Of the residents training in a specialty, 45 percent trained in one of the primary care specialties. More than half of these primary care residents trained in internal medicine. As previously noted, research has shown that many primary care residents who train in internal medicine go on to subspecialize rather than practice in primary care. (See fig. 4.) The number of residents in GME specialty programs grew, but their share of overall residents declined slightly (from 85 to 83 percent), as subspecialty growth was more than twice as fast.
Specifically, from 2005 through 2015, the number of residents in a specialty grew by 19 percent (from 88,976 to 105,796), compared with 42 percent (from 15,354 to 21,782) for subspecialties. For both specialties and subspecialties, the number of residents grew more in urban areas than in rural areas, but the rate of growth was higher in rural areas, particularly for specialties (53 percent compared with 19 percent for specialties, and 47 percent compared with 42 percent for subspecialties). In 2015, residents in specialties accounted for about 90 percent (or 1,116) of the 1,223 residents training in rural areas. The regional distribution of residents in medical specialties and subspecialties reflected overall trends, with both concentrated in the Northeast but growing faster in other regions from 2005 through 2015. When looking specifically at primary care specialties, from 2010 through 2015, the years for which we had comparable data, the number of residents in a GME primary care specialty program increased slightly more than in all other specialties (11 percent vs. 9 percent). Among the primary care specialties, residents in family medicine grew by 15 percent (from 10,120 to 11,686), compared with 10 percent for internal medicine (from 22,990 to 25,354) and 9 percent for pediatrics (from 8,106 to 8,814).

The vast majority of residents trained in GME programs accredited by ACGME, accounting for more than 90 percent of all residents from 2005 through 2015. As a result, residents in ACGME programs accounted for 84 percent of the growth in the number of residents over the time period. However, the rate of growth was higher for residents in AOA programs. Specifically, the number of residents in ACGME programs increased by 19 percent, from 100,918 to 120,497. In comparison, the number of residents in AOA programs increased by 108 percent, from 3,412 to 7,081. As a result, the percentage of all residents in AOA programs grew from 3 percent to 6 percent. (See fig. 5.)

Residents in ACGME and AOA programs both experienced growth in areas with historically less GME training, though residents in AOA programs were responsible for most of the growth in rural areas, as summarized below. While the largest number of residents in ACGME programs trained in the Northeast from 2005 through 2015, the largest number of residents in AOA programs trained in the Midwest. (See fig. 6.) However, the South had the highest percentage growth in residents in ACGME programs over the time period, and the West had the highest growth in residents in AOA programs. From 2005 through 2015, almost all residents in ACGME and AOA programs trained in urban areas, although both types of residents experienced growth in rural areas. Despite their smaller number, residents in AOA programs accounted for 70 percent of this growth. Specifically, the number of residents in AOA programs in rural areas increased by 291 (from 100 to 391), while the number of residents in ACGME programs increased by 128 (from 704 to 832). From 2010 through 2015, the years for which we had comparable data, the number of residents in primary care GME programs increased for both ACGME and AOA. The number of residents in ACGME primary care programs increased from 41,386 in 2010 to 44,563 in 2015. For residents in AOA primary care programs, the number increased from 1,235 in 2010 to 2,747 in 2015.
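Two simple calculations underlie the comparisons in this section: residents per 100,000 people and percentage growth. A minimal sketch, applied to figures cited above (the regional population in the last example is hypothetical and for illustration only):

```python
# The two calculations used throughout this section, applied to
# figures cited in the text. The regional population used in the
# per-100,000 example is hypothetical.

def per_100k(residents, population):
    """Residents per 100,000 people."""
    return residents / population * 100_000

def pct_growth(old, new):
    """Percentage change between two counts."""
    return (new - old) / old * 100

print(f"{pct_growth(104_330, 127_578):.0f}%")  # 22% overall growth, 2005-2015
print(f"{pct_growth(88_976, 105_796):.0f}%")   # 19% specialty growth
print(f"{pct_growth(15_354, 21_782):.0f}%")    # 42% subspecialty growth

# Hypothetical region with 38,000 residents and 55 million people:
print(f"{per_100k(38_000, 55_000_000):.0f} residents per 100,000")  # 69
```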
Hospitals’ use of three Medicare payment incentives—the primary federal efforts we identified that are intended to increase GME training in rural areas—was often limited. In part, this is because relatively few rural hospitals reported having GME training—49 rural hospitals (7 percent), compared with 1,144 hospitals (26 percent) overall, in fiscal year 2013. For those rural hospitals that did have GME training, two incentives allow them to expand the number of resident FTEs they can count for Medicare payments, compared with what is allowed for urban hospitals. The first incentive sets resident FTE caps for rural hospitals that had GME training in 1996 at 130 percent of the resident FTEs at the hospital that year. Data indicate that, where applicable, rural hospitals are likely to use this incentive, as most of the 49 rural hospitals reported resident FTEs close to or above their resident FTE caps. However, use of the second incentive was more limited. Specifically, less than half of the 49 hospitals reported data indicating that they used the incentive that allows them to increase their resident FTE caps if they have a GME program in one specialty (for example, family medicine) but start a new GME program in another specialty (for example, internal medicine). The third incentive encourages urban hospitals to start Rural Training Track programs—where residents spend a portion of their GME training in a rural area—by allowing a specific resident FTE cap adjustment for that Rural Training Track if the program meets certain requirements. While CMS was not able to provide information on the number of hospitals that have used this incentive, evidence suggests its use may be limited. Specifically, one outside expert we spoke with said that, based on communications with Rural Training Track programs, only two urban hospitals claimed an increase to their resident FTE caps for starting such programs from 2000 through 2010.

In addition to the general challenges associated with offering GME training in rural areas, CMS officials reported a number of challenges with using Medicare funding to support rural GME training. For example, officials told us that the way a hospital’s resident FTE caps are established, as well as the amount it is paid per resident for DGME, may make rural hospitals hesitant to partner with urban hospitals to provide GME training. Officials said some rural hospitals had their caps or per-resident amounts set at low levels because they served as a rotational site for an urban hospital for a small amount of GME training, and these hospitals later faced challenges in starting a full GME program, which is likely more costly. CMS officials also identified challenges for urban hospitals wanting to use the Medicare payment incentive for starting Rural Training Track programs. For example, some urban hospitals have formed a GME training program with rural hospitals that mirrors the structure of Rural Training Track programs, but the program does not meet all the statutory requirements for the Medicare payment incentive, such as being separately accredited. Other urban hospitals have faced challenges if they started a Rural Training Track program with one rural site and later wanted to expand the program to another rural site, as they are not eligible to receive an additional increase to their caps for the expansion to the second site.

We identified four federal efforts intended to increase primary care GME training.
While all these efforts funded new primary care residents, some also funded residents in other specialties or subspecialties, as summarized below. (See table 2 for additional information on residents funded by the four efforts.)

Residents funded under the Teaching Health Center program—which provided GME payments to primary care medical and dental GME programs in community-based settings—grew from 76 in 2012 to 600 in 2015. While the program included other specialties in its definition of primary care, in a given year, about 90 percent of funded residents were in family medicine, internal medicine, and pediatrics. As of the end of 2016, 384 residents funded through the program had completed their GME training.

Residents under the Primary Care Residency Expansion program—which provided grants to primary care GME programs—increased from 168 in 2012 to 500 in 2015. As of the end of 2016, 489 residents funded through the program had completed their GME training.

The VA GME expansion—which aims to increase VA GME training by up to 1,500 positions over 10 years—distributed 366 FTEs for new residents to VA medical centers for 2016 and 2017. Of these, 163 were in primary care.

The Medicare GME redistribution transferred 599 unused IME and 692 unused DGME resident FTEs to other hospitals, effective July 1, 2011. Hospitals that received increases to their resident FTE caps were, for a 5-year period, required to use 75 percent of the redistributed FTEs to support new primary care or general surgery residents, as well as to meet a 3-year primary care average (a threshold test sketched below). In fiscal year 2013, data from 40 of the 55 hospitals that received cap increases indicate that over 85 percent of the redistributed FTEs used were for either primary care or general surgery residents.

The four federal efforts varied in the extent to which they funded family medicine residents versus residents in other primary care specialties. This could affect how many of the trained residents ultimately practice in primary care, as family medicine residents are less likely to subspecialize. In particular, while about half of the primary care residents funded through the Teaching Health Center and Primary Care Residency Expansion programs in 2015 were in family medicine, family medicine residents accounted for only 7 percent of the FTEs distributed through the VA GME expansion in 2016 and 2017. VA officials told us that family medicine GME programs have tended to focus their training in non-VA settings, in part because VA has no pediatric patients and has historically provided very little women’s health care, which is not aligned with some of family medicine’s GME training requirements. However, VA is working to build partnerships with family medicine programs as part of the GME expansion.

To varying degrees, the four federal efforts funded residents in regions of the country that have disproportionately low numbers of residents relative to the population. All four efforts added residents outside of the Northeast—where the resident-to-population ratios are the highest. However, while the Medicare GME redistribution and VA GME expansion efforts added over 90 percent of their FTEs to regions outside of the Northeast, the regional distribution of residents funded by the Teaching Health Center and Primary Care Residency Expansion programs in 2015 reflected national trends, with the most funded residents in the Northeast. (See table 3.)
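The redistribution's primary care requirement noted above is essentially a threshold test. A minimal sketch of that test, with hypothetical hospital figures (the actual rule also involved the 3-year primary care average and other conditions):

```python
# Sketch of the redistribution's 75 percent requirement: for 5 years,
# 75 percent of the redistributed FTEs a hospital used had to support
# new primary care or general surgery residents. The figures below
# are hypothetical; the real rule involved additional conditions.

def meets_75_percent_test(redistributed_ftes_used, qualifying_ftes):
    """True if at least 75% of the redistributed FTEs used supported
    new primary care or general surgery residents."""
    return qualifying_ftes >= 0.75 * redistributed_ftes_used

print(meets_75_percent_test(redistributed_ftes_used=10, qualifying_ftes=9))  # True
print(meets_75_percent_test(redistributed_ftes_used=10, qualifying_ftes=7))  # False
```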
Though funding GME training in HPSAs and in rural areas was not a priority across the four federal efforts, they did provide funding for residents in such areas, and at greater rates than where residents train overall. However, this varied by effort, as outlined below.

The percentage of residents funded in a HPSA ranged from 34 percent of all primary care FTEs distributed through the VA GME expansion to nearly 60 percent of all FTEs transferred through the Medicare GME redistribution, compared with the 33 percent of all primary care residents that trained in a HPSA in 2015. Similar to primary care GME training overall, residents funded by the efforts were far more likely to train in a population HPSA than in a geographic HPSA.

The percentage of residents funded in rural areas ranged from just over 2 percent of residents funded through the Primary Care Residency Expansion program to just over 5 percent of residents funded through the Teaching Health Center program, which was higher than the 1 percent of all residents that trained in rural areas in 2015. The small number of funded residents training in rural areas could be due in part to the number of rural GME programs or providers that applied for funding. For example, based on our analysis, just 3 rural hospitals applied for the Medicare GME redistribution, each of which received new FTEs.

Officials from CMS, HRSA, and VA cited the challenges of increasing GME training in rural areas. For example, HRSA officials said that rural areas may have difficulty meeting accreditation requirements for certified faculty members or volume of services, and CMS officials noted they may face challenges attracting residents. HRSA officials also noted that it may be hard for rural hospitals to fund the significant costs associated with building infrastructure to start or expand GME training. VA officials said that, partially in response to this challenge, the agency awarded separate planning and infrastructure grants through the GME expansion to help sites with very little or no prior GME training. Officials noted that, as the expansion is only in the first few years of its 10-year timeline, these grants are just being awarded and new partnerships are just being formed. As a result, officials said that it will take time for such investments to pay off and for new VA GME programs to be built in rural and underserved areas.

Not all four federal efforts plan to report on whether funded residents ultimately practice in primary care or in a rural or underserved area after completing GME training, but evidence from the Teaching Health Center and Primary Care Residency Expansion programs suggests that many of their residents will. According to HRSA, of the 47 Teaching Health Center residents completing training in 2014, over 80 percent were practicing in primary care, 44 percent in a medically underserved area, and 15 percent in a rural area. Additionally, among the 156 Primary Care Residency Expansion residents who completed training in 2014, 67 percent intended to practice in a primary care setting, 46 percent intended to practice in a medically underserved area, and 17 percent intended to practice in a rural setting.

Despite increasing the number of primary care residents, the four federal efforts still represent a relatively small investment in primary care GME training when compared with overall federal GME spending and the number of primary care residents nationally.
Specifically, average annual funding for the Teaching Health Center program, the Primary Care Residency Expansion program, and the VA GME expansion accounted for less than 1 percent of the more than $15 billion in estimated annual federal spending on GME training, and the Medicare GME redistribution accounted for less than 1 percent of the approximately 79,000 IME and 83,000 DGME FTEs that hospitals reported being able to claim for Medicare payment in fiscal year 2013. In addition, we estimate that the residents added through these efforts so far would have represented about 3 percent of the nearly 50,000 primary care residents trained across the country in 2015.

Moreover, some of the new primary care GME training added through the federal efforts may not continue, in part because the Primary Care Residency Expansion program and the Medicare GME redistribution were one-time efforts that have ended. According to HRSA officials, just over 70 percent of the 74 Primary Care Residency Expansion awardees responding to a request for information (a 96 percent response rate) reported that they would continue to support all of the added primary care resident positions after the grant period, and an additional 7 percent planned to sustain at least some of the positions. However, for a variety of reasons, including lack of funding, 19 percent of awardees planned to revert to their prior number of residents, and 3 percent were unsure of their plans. IME and DGME FTEs received through the redistribution became a permanent part of a hospital’s resident FTE caps. However, as the requirement to use 75 percent of the redistributed FTEs for new primary care or general surgery residents expired in June 2016, hospitals that added primary care residents as a result of the redistribution may not continue to do so.

In addition, while the Teaching Health Center program and the VA GME expansion are ongoing, they may face challenges in sustaining funded residents or adding more primary care GME training. Without reauthorization, funding for the Teaching Health Center program will end in fiscal year 2017. HRSA officials said that Teaching Health Center awardees may have difficulty sustaining added residents without continuation of the grant program, in large part because they are community health centers that do not have other revenue lines to support GME training. For example, they said that a recent decrease in Teaching Health Center funding resulted in awardees training fewer residents than they originally projected, and some awardees reported they may not be able to continue their GME programs. For VA’s GME expansion, as of June 2016, about 12 percent of the 163 primary care FTEs distributed to VA medical centers remained unoccupied, which VA officials said could be because, for example, the GME training programs were not ready to use them. Further, the percentage of primary care FTEs distributed through the expansion fell from 51 percent in 2016 to 37 percent in 2017. VA officials said that the drop could reflect pent-up demand in the first year of funding, but they also reported challenges forming new partnerships with academic affiliates in primary care specialties, such as those in family medicine. However, VA officials noted that they are actively pursuing such partnerships, and it could take 5 to 7 more years to determine the extent to which they are successful at producing new resident positions.
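The shares cited at the start of this discussion follow from simple division. In the sketch below, the resident count is an approximation consistent with figures reported in this section, and the combined annual funding figure is illustrative only:

```python
# The arithmetic behind "less than 1 percent" and "about 3 percent."
# The resident count is an approximation consistent with figures cited
# in this report; the combined annual funding figure is illustrative.

def share(part, whole):
    """Part as a percentage of the whole."""
    return part / whole * 100

# Roughly 1,500 primary care residents added by the four efforts,
# against the nearly 50,000 primary care residents training in 2015:
print(f"{share(1_500, 50_000):.0f}% of primary care residents")  # 3%

# Illustrative combined annual funding of ~$100 million, against more
# than $15 billion in estimated annual federal GME spending:
print(f"{share(100e6, 15e9):.1f}% of federal GME spending")      # 0.7%
```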
Given that these four federal efforts represent a relatively small investment in primary care GME training, and that most other federal funding cannot be targeted to primary care, the efforts may not be sufficient to meet projected primary care physician shortages. In November 2016, HRSA projected a shortage of up to 23,640 primary care physicians by 2025—although it indicated that changes in health care delivery, such as greater use of nurse practitioners and physician assistants, could affect the extent of that shortage. It also noted substantial regional variation, with the South facing the largest shortage and the Northeast having almost no shortage. HRSA officials told us their projections did not account for the primary care efforts we identified. When asked the extent to which the efforts might mitigate the shortages, officials said that the efforts could help to lessen them, particularly if the ongoing efforts are fully supported with sustainable, long-term funding. For example, they said that if the Teaching Health Center program were able to continue to support the same number of resident FTEs as are currently approved, it would produce approximately 3,000 new primary care physicians by 2025. However, officials acknowledged that the efforts account for a small percentage of overall primary care residents being added to the physician workforce. As most other federal GME funding comes from Medicare, which HHS generally distributes according to statutory formulas unrelated to workforce needs, it is unlikely that other federal funding sources will add the additional primary care positions needed to fill the gap. In a 2015 report, we recommended that, to ensure that HHS workforce efforts meet national needs, HHS should develop a comprehensive and coordinated planning approach to guide its health care workforce development programs. In making this recommendation we noted that, without such planning, HHS cannot fully identify the gaps between existing programs and national needs as well as the actions needed to address these gaps. In response, HHS concurred with our recommendation and, among other things, noted that it could convene an interagency group to assess such things as gaps in workforce programs and potential requests to the Congress for modified or expanded legislative authority. As of December 2016, HHS officials indicated that the agency had not taken steps to convene this group, but said that it is still considering doing so in the future. In light of the limited nature of current federal efforts to increase primary care GME training, we continue to believe that our recommendation should be implemented and agree that an interagency group like the one proposed by HHS is an important first step toward ensuring a more comprehensive and coordinated approach to workforce planning. We provided a draft of this report to HHS and VA for comment. HHS provided technical comments, which we incorporated as appropriate. We also received emailed comments from VA through an analyst in its Office of Congressional and Legislative Affairs that reiterated our findings on the number of GME primary care positions added under the Veterans Access, Choice, and Accountability Act of 2014. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. 
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix II.

With the projected shortage of primary care physicians and the challenges of recruiting physicians to rural areas, some experts have suggested that providers other than physicians could help address gaps in care. Nurse practitioners—one type of advanced practice registered nurse—and physician assistants are two examples of such providers. Both have completed graduate-level education and, depending on the specialty they choose, are trained to deliver a wide range of care, including primary care. While nurse practitioners and physician assistants can furnish some of the care provided by physicians, the extent to which they can provide this care independently varies based on state laws. For example, according to information from the American Association of Nurse Practitioners, state laws differ in how much they allow nurse practitioners to provide patient services independently, with 22 states and the District of Columbia allowing them to practice completely independently and the remaining 28 states imposing varying restrictions, such as supervision by physicians when performing certain services. Physician assistants traditionally practice under some degree of physician supervision; according to a National Governors Association study, most states allow physicians to determine the medical tasks they delegate to physician assistants and the appropriate level of supervision, though some states have more explicit requirements.

In November 2016, the Department of Health and Human Services’ Health Resources and Services Administration (HRSA) reported that although it projects a shortage of primary care physicians by 2025, under current utilization rates, the national supply of primary care nurse practitioners and physician assistants is expected to outpace demand. HRSA also projected that these trends will vary by region, and it expects an oversupply of these providers to be greatest in some of the regions facing a shortage of primary care physicians, such as the South. HRSA noted that more effective integration of nurse practitioners and physician assistants into health care delivery could help mitigate physician shortages in primary care.

Similar to how we identified and described federal efforts to increase graduate medical education in primary care or rural areas, we also identified federal efforts to increase the number of nurse practitioner and physician assistant trainees in primary care—which we defined as family medicine, internal medicine, and pediatrics—or in rural areas. Specifically, we reviewed government and nongovernmental reports on funding for health care workforce training and interviewed officials from the Department of Health and Human Services and the Department of Veterans Affairs (VA). We limited our selection to efforts that provided funding directly to training programs within the civilian health care system and that were ongoing as of January 2016. As such, we excluded efforts that provided payments directly to individual trainees, including loan repayment and scholarship programs.
We also excluded efforts that were not primarily focused on increasing the number of trainees in clinical training. For example, we excluded efforts that focused primarily on curriculum enhancement for existing training programs or on training for leadership, management, and research skills. Through this review, we identified three efforts within HRSA that met our criteria. One of these efforts was focused on physician assistants, and the other two focused on nurse practitioners. (See table 4.) For each effort, we obtained and analyzed HRSA data to determine funding and trainee levels. To determine the reliability of the data we reviewed, we interviewed agency officials and checked for outliers and obvious errors to test the internal consistency and reliability of the data. After taking these steps, we determined that the data were sufficiently reliable for the purposes of our reporting objectives. We also interviewed HRSA officials and reviewed agency annual reports for further information about the efforts. In this appendix, references to years are to academic years unless otherwise noted.

Overall, we found that the amount of funding and the number of trainees funded per awardee varied across the efforts. For example, the Expansion of Physician Assistant Training and the Advanced Nursing Education Expansion had smaller numbers of applicants awarded funding and trainees funded, and a larger average amount of funding per awardee, than the Advanced Education Nursing Traineeship. (See table 5.) According to HRSA, while the median traineeship award for the Expansion of Physician Assistant Training and the Advanced Nursing Education Expansion during 2014 was $22,000, the median award for the Advanced Education Nursing Traineeship was $7,390.

As part of its annual reporting, HRSA examined the extent to which graduates of these programs practiced in primary care, rural areas, or medically underserved areas after completing their training. According to these reports, though the number of 2013 Expansion of Physician Assistant Training graduates was smaller, they were more likely than 2013 graduates of the two nurse practitioner programs to be practicing in primary care or in rural or medically underserved areas one year after graduating. (See table 6.)

HRSA reported that funding for trainees in the Expansion of Physician Assistant Training and the Advanced Nursing Education Expansion ended in 2016, while the Advanced Education Nursing Traineeship was ongoing at the time of our review. When asked about the sustainability of the efforts without federal funding, HRSA officials referenced a recent study, which found that of 22 Expansion of Physician Assistant Training grantees responding to a survey, 82 percent planned on maintaining all expanded positions, 4 percent intended to maintain a portion of the expanded positions, and 14 percent intended to revert to their previous training levels. HRSA officials said that they would anticipate similar levels of sustainability for the Advanced Nursing Education Expansion.

In addition to the contact named above, William Hadley, Assistant Director; Rachel Svoboda, Analyst-in-Charge; Toni Harrison; and Daniel Lee made key contributions to this report. Also contributing were Samuel Amrhein, Emily Flores, Vikki Porter, and Jennifer Whitworth.
A well-trained physician workforce adequately distributed across the country is essential for providing Americans with access to quality health care services. While studies have reached different conclusions about the nature and extent of physician shortages, the federal government has reported shortages in rural areas and projects a deficit of over 20,000 primary care physicians by 2025. GME training is a key factor affecting the supply and distribution of physicians. Federal funding for this training is significant, more than $15 billion per year, according to the Institute of Medicine. Given the federal investment and concerns about physician supply, GAO was asked to review aspects of GME training. This report describes (1) changes in the number of residents in GME training by location and type of training from academic years 2005 through 2015, (2) federal efforts intended to increase GME training in rural areas, and (3) federal efforts intended to increase GME training in primary care. To determine changes in the locations and types of residents in GME training, GAO analyzed resident data from the accrediting bodies overseeing GME training. To identify and describe relevant federal efforts, GAO also reviewed federal laws, reports, and data, and interviewed agency officials. The locations and types of physicians in graduate medical education (GME) training—known as residents—generally remained unchanged from 2005 through 2015, but there was notable growth in certain areas. As shown in the table below, residents in GME training remained concentrated in the Northeast. At the same time, the number of residents grew more quickly in other regions, though this growth was somewhat tempered by regional population growth. Residents also remained concentrated in urban areas, which continued to account for 99 percent of residents, despite some growth in rural areas. From 2005 through 2015, over 80 percent of residents were receiving training in a medical specialty, which is required for initial board certification. In 2015, nearly half of these residents were in a primary care specialty (internal medicine, family medicine, and pediatrics), versus other specialties, such as anesthesiology. While this represented a slight increase from 2010, research has shown that many primary care residents will go on to receive additional GME training in order to subspecialize, rather than practice in primary care. Subspecialty training accounted for less than 20 percent of residents from 2005 through 2015, though the number of residents in subspecialties grew twice as fast as the number in specialties. GAO found that the primary federal efforts intended to increase GME training in rural areas were incentives within the Medicare program that can provide hospitals with higher payments for such training. However, hospitals' use of these incentives was often limited, and certain Medicare GME payment requirements could present barriers to greater use. GAO identified four federal efforts intended to increase primary care GME training. Each effort added new primary care residents and provided funding in areas of the country with disproportionately low numbers of residents or physicians, though to varying degrees. The four efforts accounted for a relatively small percentage of primary care residents and overall federal GME funding, about 3 percent and less than 1 percent, respectively.
In addition, the extent to which the residents added by these efforts will be maintained or continue to grow is uncertain, in part because federal funding for some of the efforts has ended. As a result, the efforts may not be sufficient to meet projected primary care workforce needs. Further, in 2015 GAO recommended that the Department of Health and Human Services (HHS) develop a comprehensive and coordinated plan for its health care workforce programs, a step critical to identifying any other efforts necessary to meet these needs; HHS has not yet implemented this recommendation. GAO continues to believe that action is needed on the 2015 recommendation for HHS to develop a plan to guide its health care workforce programs. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Rule of law reform must take place within China's legal and political system, and any assessment of rule of law development should be judged in the context of Chinese institutions. China's current legal system is relatively new and is based, to a great extent, on the civil law codes of Germany as adopted by Japan and, to some extent, on the legal institutions of the former Soviet Union and China's traditional legal system. Two important characteristics of Chinese legal development since 1949 have been the subordination of law to Communist Party policy and the lack of independence of the courts. Another characteristic is the large number of legal measures used to implement a law, including administrative regulations, rules, circulars, guidance, Supreme People's Court interpretations, and similar local government legal measures. China's central government laws, regulations, and other measures generally apply throughout China. Although local governments enact laws and regulations, these must be consistent with central government measures. In 1996, a number of China's top leaders emphasized the principle of administering the country in accordance with law. Several years later, China amended its constitution to incorporate this principle. A substantial number of the many commitments that China has made to the WTO can be characterized as related to developing rule of law practices. In a broad sense, China's WTO commitments suggest that in its commercial relations China is on the way to becoming a more rules-based society, contingent on the faithful implementation of its WTO accession agreement. This agreement is highly detailed and complicated, running to over 800 pages including annexes and schedules. It is the most comprehensive accession package for any WTO member. As part of this package, China agreed to ensure that its legal measures would be consistent with its WTO obligations. About 10 percent of the more than 600 commitments that we identified in China's accession package specifically obligate China to enact, repeal, or modify trade-related laws and regulations. These commitments cover such trade policy areas as agricultural tariff-rate quotas, export and import regulation, technical barriers to trade, intellectual property rights, and nondiscrimination. In addition, by becoming a WTO member, China has agreed to abide by the underlying WTO agreements, such as the General Agreement on Tariffs and Trade, the General Agreement on Trade in Services, the Agreement on Trade-Related Aspects of Intellectual Property Rights and the Understanding on the Rules and Procedures Governing the Settlement of Disputes. China also has made a substantial number of important, specific commitments in the rule of law-related areas of transparency, judicial review, uniform enforcement of legal measures, and nondiscrimination in its commercial policy. In the area of transparency, China has agreed to designate an official journal for publishing trade-related laws and regulations and to provide a reasonable period for public comment before implementing them. China has also agreed to designate an enquiry point where individuals, business enterprises, and WTO members can request information relating to these published laws and regulations. Transparency requirements and commitments to report information to the WTO together represent about a quarter of the commitments we identified in China's accession package.
In the area of judicial review, China has agreed to establish or designate tribunals to promptly review trade-related actions of administrative agencies. These tribunals are required to be impartial and independent of the administrative agencies taking these actions. In the area of uniform enforcement, China has agreed that all trade-related laws and regulations shall be applied uniformly throughout China and that China will establish a mechanism by which individuals and enterprises can bring complaints to China's national authorities about cases of nonuniform application of the trade regime. Finally, in the area of nondiscrimination, China agreed that it would provide the same treatment to foreign enterprises and individuals in China as is provided to Chinese enterprises. China also agreed to eliminate dual pricing practices as well as differences in treatment provided to goods produced for sale in China and those produced for export. (See the appendix for examples of rule of law-related commitments included in China's WTO accession agreement.) Chinese government officials have stated their commitment to make WTO-related reforms that would strengthen the rule of law. Furthermore, China's plans for reform go beyond conforming its laws and regulations to China's WTO commitments and include a broad legal review, as well as reforms of judicial and administrative procedures. Chinese officials with whom we spoke discussed the numerous challenges they face in these areas and said that these reforms will take time to implement. They also stated their need for outside assistance to help them with their reform efforts. First, Chinese government officials are in the midst of a comprehensive, nationwide review of laws, regulations, and practices at both the central and provincial levels. This review is to lead to repeals, changes, or new laws. According to one report, Chinese officials have identified more than 170 national laws and regulations and more than 2,500 ministry regulations as being WTO related. Officials whom we interviewed from the Ministry of Foreign Trade and Economic Cooperation (MOFTEC) contend that generally China has done a good job of implementing its WTO obligations to date. MOFTEC officials said that complete implementation will take time and that part of their role is to teach other ministries how to achieve reform according to WTO commitments. They noted the importance of their efforts to coordinate WTO-related reforms with other ministries because Chinese laws tend not to be very detailed and, as a result, it is difficult to incorporate the language of specific WTO commitments into Chinese laws. Officials said that, consequently, Chinese laws will sometimes use general, open-ended phrases that refer to WTO commitments, such as the services annexes, while the detail is set forth in the implementing regulations. Provincial authorities are still reviewing their laws and regulations to see if they are consistent with national laws. Provincial-level officials told us that in some cases they were still waiting for the national government to finish its legislative and regulatory processes, which will guide their own review of laws and regulations at their level. Prior to their enforcement, provincial-level laws, regulations, and other regulatory measures that implement the central government's legal measures are submitted to the central government for review.
Chinese officials told us that they have found many provincial regulations that did not conform to national laws and regulations. MOFTEC officials estimated that it would take a year or two to complete this entire reform process, while some provincial officials estimated 2 to 3 years. Second, China is undertaking reform of its judicial processes to ensure that they are compatible with its WTO commitments. The Supreme People's Court informed us that since China's accession it has been revising hundreds of judicial interpretations about laws that do not conform to WTO rules. It has also instructed the judiciary throughout the country to follow the revised interpretations and to undertake similar work at their respective levels. Officials told us that the court is also involved in reforms related to the WTO areas of judicial independence and uniform application of legal measures. For example, with regard to judicial independence, in February of this year the court issued new regulations to improve the adjudication of civil and commercial cases involving foreign parties. Under these regulations, mid-level and high-level courts, in contrast to the basic-level courts, will directly adjudicate cases involving, among other subjects, international trade, commercial contracts, letters of credit, and enforcement of international arbitration awards and foreign judgments. Furthermore, China recently amended its Judges Law to require that new judges pass a qualifying exam before being appointed to a judicial position. Third, China is reforming its administrative procedures and incorporating the rule of law into decision-making. About one-third of the commitments we identified in China's WTO accession agreement relate to guidance about how a particular commitment should be carried out. Officials told us that they are attempting to reduce the number of layers necessary to approve commercial activities and to make these processes more transparent. These actions can help implement rule of law practices at the day-to-day level. These reforms are also still underway at the central and provincial levels. For example, State Economic and Trade Commission (SETC) officials told us that they have identified 122 administrative procedures that must be changed to conform to WTO rules but that 40 percent of these have yet to be changed. In Shanghai, officials said that they have eliminated 40 percent of government approvals under their jurisdiction and that they are working to make the remaining 60 percent more efficient. Some Chinese officials with whom we spoke acknowledged challenges in completing all these reforms in a timely manner. These challenges include insufficient resources, limited knowledge of WTO requirements, and concerns about the effects on the economy of carrying out particular WTO commitments. For example, Chinese officials said that the changes needed to conform their tariff-rate quota administration process to WTO requirements were so difficult that they were unable to allocate the quota and issue certificates in time to meet the deadlines set forth in China's WTO commitments. A number of Chinese officials also indicated that it has been very difficult to fulfill a WTO transparency commitment that requires China to translate all its trade laws, regulations, and other measures into an official WTO language—English, French, or Spanish. This difficulty is due in part to the abundance of the materials to be translated and the highly technical nature of many legal measures.
Many Chinese officials we interviewed emphasized the importance of the steps they had taken at both the national and subnational levels to increase the training of government officials about WTO rules. For example, the State Economic and Trade Commission and the General Administration of Customs said they have been holding training sessions for over a year at the national, provincial, and municipal levels on general WTO rules and China’s WTO obligations. In addition, the National Judges College plans to train 1,000 judges from local courts across the country and send others for training abroad. Furthermore, governments in Shanghai, Guangzhou, and Shenzhen have established WTO affairs consultation centers that organize training and international exchange programs for midlevel Chinese officials on implementing WTO reforms. Despite these efforts, Chinese officials acknowledged that their understanding of WTO rules remains limited and that more training is needed. According to several Chinese government officials we interviewed, China continues to lack the expertise and the capacity to provide all the training necessary to implement WTO rules and, therefore, it has asked for technical assistance both multilaterally and bilaterally from outside China. As a result, the WTO secretariat, the European Union, the United States, and other WTO member countries have either given or plan to give training assistance to China in numerous areas, including rule of law-related programs. For its part, the U.S. government has provided limited training on a range of WTO-related topics, including standards, services, antidumping requirements, and intellectual property rights. The U.S. private sector also has provided technical assistance. In our interviews of U.S. businesses in China, almost one third of respondents said that they had given some assistance to China that related to implementation of China’s WTO commitments. Preliminary data from our written survey indicate that China’s WTO commitments related to rule of law reforms are some of the most important for U.S. businesses with a presence in China. For example, more than 90 percent of businesses that have responded to date indicated that the following reform commitments were important or somewhat important to their companies: consistent application of laws, regulations, and practices (within and among national, provincial & local levels); transparency of laws, regulations, and practices; enforcement of contracts and judgments/settlement of disputes; and enforcement of intellectual property rights. When asked to identify the three commitments that were most important to their companies, two WTO rule of law-related areas received the greatest number of responses in our written survey — consistent application of laws, regulations, and practices; and enforcement of intellectual property rights. We will include a more complete analysis of these and other issues considered in our business survey in a report to be released this fall. A majority of businesses answering our survey expected these rule of law commitments to be difficult for China to implement relative to its other WTO commitments. 
Businesses cited a number of reasons for this relative difficulty, including (1) the cultural "sea change" required to increase transparency; (2) a reluctance to crack down on intellectual property right violations stemming from a fear of destabilizing the labor force; and (3) the challenge of implementing laws, rules, and regulations consistently among provinces and within and among ministries. Similarly, in our interviews, company officials noted the magnitude of WTO-related reforms, including those that would strengthen the rule of law. They said that successful implementation would require long-term effort. Commensurate with the expected difficulty in carrying out reforms, we heard numerous specific individual complaints from U.S. companies, including concerns about vague laws and regulations that create uncertainty for foreign businesses; lack of transparency, which denied foreign companies the ability to comment on particular draft laws or regulations or to respond to administrative decisions; conflicting and inconsistent interpretations of existing laws and regulations from Chinese officials; unfair treatment by, and conflicts of interest of, Chinese regulators; and uneven or ineffective enforcement of court judgments. Nevertheless, U.S. businesses in China believe that the Chinese leadership is strongly committed to reform and that the leadership has communicated this commitment publicly. Several private sector officials noted a more open, receptive, and helpful attitude on the part of the government officials with whom they had contact. Other private sector officials noted more specific positive actions. For example, officials noted improvements in intellectual property right protections, including crackdowns against counterfeiters in Shanghai and a case where a U.S. company won a judgment against a counterfeiter in a Chinese court that included an order to cease the operations of the copycat company. First, it is very clear that China has shown considerable determination in enacting the numerous laws, regulations, and other measures to ensure that its legal system and institutions, on paper, are WTO compatible. Nevertheless, the real test of China's movement toward a more rule of law-based commercial system is how China actually implements its laws and regulations in fulfilling its WTO commitments. At this point, it is still too early for us to make any definitive judgments about China's actual implementation. Second, as you know, it has been the hope of U.S. government officials and others that China's accession to the WTO would constitute a significant step forward in China's development toward becoming a more rule of law-oriented society. It is worth noting that China's reform efforts, which have been ongoing for more than 20 years, have included substantial legal developments that could be described as rule of law related. These include the enactment of numerous laws, regulations, and other measures that apply to many aspects of Chinese society beyond the WTO, the recent proliferation of law schools and legal training, and the recognition of the need for judicial reform. It is still too early to know where this process will lead, but there is hope that the many rules-based commitments that China made to become a WTO member will influence legal developments in other areas. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Commission may have at this time.
For future contacts regarding this testimony, please call Susan Westin at (202) 512-4128. Adam Cowles, Richard Seldin, Michelle Sager, Matthew Helm, Simin Ho, and Rona Mendelsohn also made key contributions to this testimony.
This testimony describes China's development of rule of law practices related to the commitments China made to the World Trade Organization (WTO), which it joined in November 2001. When China joined the WTO, it agreed that its legal measures would be consistent with its WTO obligations. GAO found 60 commitments that specifically obligate China to enact, repeal, or modify trade-related laws or regulations. In addition, China has made a substantial number of other WTO commitments related to the rule of law in transparency, judicial review, uniform enforcement of laws, and nondiscriminatory treatment. Chinese government officials described how their efforts for reform go beyond China's WTO commitments and include broad reforms of laws and regulations at the national and provincial levels, as well as reforms of judicial and administrative procedures. However, Chinese officials acknowledged the challenges they face in completing the necessary reforms and identified the need for outside training assistance. According to GAO's survey, U.S. businesses in China consider rule of law-related WTO commitments to be important, especially the consistent application of laws, regulations, and practices in China, and enforcement of intellectual property rights. However, a majority of businesses answering the survey anticipated that these rule of law commitments would be difficult for the Chinese to implement, and they identified some concerns over specific implementation issues.
Work on the CVC project has progressed in many areas, but the project completion date has slipped to October 26, 2007, about 6 weeks beyond the September 17, 2007, completion date discussed at the Subcommittee's last CVC hearing. This 6-week slippage is due to continuing problems associated with the CVC's fire protection system, but many other important activities, including those associated with the HVAC system, East Front, and security system, have been delayed as well. Last week, at the request of the Subcommittee and as we had recommended, AOC completed and sent to Congress an action plan for improving management execution of the project and its schedule. The action plan was responsive to our recommendation. However, it is too early to tell whether implementing the plan will curtail the types of schedule slippages that have occurred since the Subcommittee's last CVC hearing and throughout the project. Moreover, although the CVC team and AOC's Fire Marshal Division have agreed on a number of important elements of the CVC's fire protection system, they have not yet agreed on all important elements. Additionally, as noted, concerns have emerged regarding the CVC's HVAC system, as well as about the impact of passing the sequence 2 contract's September 15, 2006, completion date. Accordingly, priority should be given to accomplishing all of the identified critical tasks so that pretesting of the facility's fire protection system can begin in the spring of 2007. Further, to ensure that AOC gets a high-quality, fully functional facility, it is essential that AOC effectively implement the actions it has identified and give careful consideration to existing contractual remedies available to it to achieve completion of all necessary work within the current estimated time frame at a reasonable cost without otherwise compromising any of the government's contractual rights or remedies. According to information provided by AOC and its construction management contractor and our observations, work on the project has advanced, in terms of both the dollar value of the work in place and individual project elements. In dollar terms, AOC's construction management contractor reported that, as of October 31, the overall CVC project was about 88 percent complete and the sequence 2 work was about 84 percent complete—up from about 86 percent and 77 percent, respectively, as of the Subcommittee's last CVC hearing. Progress on individual project elements includes the following: Interior CVC work has moved forward, according to AOC's construction management and sequence 2 contractors. For example, the CVC team and AOC's Fire Marshal Division have reached or nearly reached agreement on the design for several critical elements of the facility's fire protection system. Agreement on these elements is necessary for the system's installation to proceed. In addition, the mechanical subcontractor has completed certain preparations for operating the CVC's air handling units, all but two of which passed a required test for leaks as of Monday, and the CVC team expects conditioned air to begin flowing to certain parts of the facility later this month. The sequence 2 contractor has also installed about 65 percent of the CVC's floor stone, up from about 43 percent at the time of the Subcommittee's last CVC hearing, and ceiling installation is complete or essentially complete in the great hall, south side corridor (lower level), both orientation theaters, and the food service area.
(AOC notes that blistered ceiling tile in the orientation theaters will have to be repaired or replaced.) Surface work continued, including paving and brick gutter work on the Senate plaza. Work on the House connector tunnel and on linking the Library of Congress tunnel with the Jefferson Building has also continued. East Front work continued, including completion of stone installation on the redesigned archway above the main central staircase from the CVC to the East Front and installation of ductwork and metal stud framework to support wall stone at the rotunda and gallery levels. In the House and Senate expansion spaces, ceiling close-in inspections, ceiling panel installation, and stone work have continued, and installation of the circular staircase that will connect all three levels of the Senate expansion space has begun. On November 7, 2006, AOC sent Congress an action plan setting forth a number of steps it has taken, plans to take, or is considering to ensure that the CVC is ready for occupancy in the fall of 2007. AOC developed this plan at the Subcommittee's request in response to recommendations we made to AOC at the Subcommittee's September 21 CVC hearing. These recommendations were aimed at enhancing AOC's execution of the schedule and project and at facilitating the Subcommittee's efforts to (1) hold AOC accountable for managing the project and (2) work with AOC to ensure that the schedule implications of proposed scope or design changes are quickly determined and considered by all appropriate stakeholders before final decisions on the proposed changes are made. AOC's actions included meeting weekly with the CVC team to deal exclusively with schedule issues; having its construction management contractor identify areas needed to meet the project's schedule that the contractor believes are understaffed or face obstacles to progress; identifying sequence 2 and construction management personnel who are responsible for meeting key schedule dates and resolving identified problems; basing the sequence 2 contractor's future award fee on meeting schedule dates; reassessing the scope, depth, and time frames associated with the pretesting and final testing of the facility's fire and life-safety protection systems; increasing communication among the CVC team, AOC's Fire Marshal Division, and the U.S. Capitol Police; and discussing proposed significant scope or design changes with Capitol Preservation Commission representatives before such proposed changes are adopted and getting the congressional leadership's approval for discretionary changes requested by the Senate or House. The actions AOC has identified are generally responsive to our recommendations and, if implemented effectively and quickly, should help AOC improve its project and schedule management as well as help ensure that the schedule and cost implications of proposed discretionary design or scope changes are appropriately considered before final decisions on them are made. However, we have concerns about the usefulness of one step AOC is considering—the possible establishment of a CVC peer review panel to assess the approaches planned for the fire protection system's pretesting and final testing. We have expressed our concerns to AOC, and it has agreed to consider them. Besides the actions it identified in its November 2006 action plan, AOC has been considering how to deal with the impact of passing the sequence 2 contract completion date, September 15, 2006.
This is a complex issue, in part because its resolution potentially involves preliminary determinations about the causes of, and responsibility for, project delays during sequence 2 up to September 15. AOC has also been considering other factors, such as the need to instill a sense of urgency and responsibility to meet the contractor's currently established fall 2007 completion time frame; the possibility of setting a specific date as the new contract completion date and the implications associated with alternative dates; the constructive manner in which the sequence 2 contractor has worked with AOC and the rest of the CVC team to accomplish work and resolve problems; and the need to ensure that the work necessary to get the facility completed is done expeditiously at a reasonable cost. We have discussed these issues with AOC and pointed out that it needs to decide how it intends to proceed as quickly as possible and also consider the risks that various options pose. In view of additional schedule slippages that have occurred and issues that have arisen since the Subcommittee's last CVC hearing, we are making additional recommendations to AOC, which we will discuss later in this testimony. In addition to the actions identified by AOC, the sequence 2 contractor has reported adding five superintendents to its CVC staff in the last several months to help achieve the schedule. Given the number and magnitude of the changes that have occurred to the sequence 2 contract since it was initially awarded and the extent to which problems have constrained progress, we believe that this additional supervision should put the team in a better position to meet schedule dates and address problems quickly. The additional time needed to make design changes to the CVC's fire protection system has extended the project's completion date by about 6 weeks since the Subcommittee's September 21 CVC hearing—from September 17, 2007, according to the schedule in effect at that time, to October 26, 2007, according to the October 2006 schedule issued last week. In addition, AOC's construction management contractor reported slippages in construction work for all of the 20 near-critical activity paths it identified in its schedule report for October 2006. For many of these activity paths, the schedule slipped at least 4 weeks. For example, the contractor reported a 65-workday delay for two East Front elevators due to late completion of necessary preceding work, a 66-workday delay for fabrication and installation of bronze doors because of fabrication problems experienced by the supplier, a 38-workday delay in ceiling close-ins in the upper level security lobby needed to resolve unexpected ceiling problems, and a 38-workday delay in completing wall stone work in the East Front basement area attributable to unanticipated design issues. The contractor also reported a 130-workday delay in the delivery of custom light fixtures, apparently the result of contractual issues between the sequence 2 contractor and its supplier. According to the construction management contractor, there are now five near-critical activity paths—including the HVAC system, East Front work, and work in the upper level security lobby and assembly rooms—for which additional slippages of 17 to 53 workdays could further delay the CVC's completion date.
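The schedule report expresses these slippages in workdays rather than calendar time. As a rough, illustrative conversion (assuming a standard 5-workday week, which is our assumption and not part of AOC's or the contractor's reporting), the delays described above translate into calendar weeks as follows:

```python
# Illustrative sketch only: convert the workday delays reported in the
# October 2006 schedule report into approximate calendar weeks. The figures
# come from the testimony above; the 5-workday week is an assumption, and
# crews working extended schedules would compress the calendar impact.

reported_delays_workdays = {
    "East Front elevators": 65,
    "bronze door fabrication and installation": 66,
    "security lobby ceiling close-ins": 38,
    "East Front basement wall stone": 38,
    "custom light fixture delivery": 130,
}

WORKDAYS_PER_WEEK = 5  # assumed standard work week

for activity, workdays in reported_delays_workdays.items():
    weeks = workdays / WORKDAYS_PER_WEEK
    print(f"{activity}: {workdays} workdays, roughly {weeks:.0f} calendar weeks")

# A slippage on a near-critical path delays the project only after it consumes
# the path's remaining float, which is why additional slippages of 17 to 53
# workdays on the five near-critical paths could, but need not, move the
# project completion date.
```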
Neither the September 17, 2007, nor the October 26, 2007, project completion date included any time for (1) installing artifacts in the exhibit gallery after a certificate of occupancy has been issued, (2) preparing for operations, or (3) dealing with risks and uncertainties. AOC's October 2006 schedule shows the artifacts installed in the exhibit gallery by November 30, 2007, but does not allow any time for dealing with risks or uncertainties associated with completing the work necessary for a certificate of occupancy, and it is not clear whether the additional time provided for installing the artifacts will be sufficient to prepare for operations. In work on the CVC's HVAC system, AOC's construction management contractor reported a 19-workday slippage, which the contractor attributed to a steam pipe support problem and a problem at the Capitol Power Plant. As we indicated at the Subcommittee's last CVC hearing, we asked our mechanical engineering consultant to reassess the status of the CVC's air handling units in early November 2006 because the CVC's HVAC system affects many activities, has had a number of problems, and poses significant risks to the project's successful completion. We asked the consultant to compare the units' mechanical readiness to provide conditioned air to the CVC as of November 1 with their readiness as of his previous assessment, on September 6, 2006. On November 1, he found that the installation of controls for the air handling units was nearing completion, substantial work had been done to insulate 7 of the units, and all of the units could be ready on schedule with committed effort by the sequence 2 mechanical subcontractor. He noted, however, that except for pressure and leak testing and controls installation, little visible work had been done on 12 of the units to address the issues he had identified during his September visit. He said he did not see a large number of workers in the air handling unit areas, and the work that was being done appeared to be on pipe insulation. Moreover, he saw little coordination between work on completing the air handling units and on the spaces they are to serve, and he noted a number of concerns about the operational readiness of both, indicating that delays in providing conditioned air to the facility and in balancing of the air handling units could potentially delay the project's schedule. Even though the HVAC system's installation and associated work are progressing, a number of issues besides those observed by our mechanical engineering consultant have arisen since the Subcommittee's last CVC hearing, heightening our concerns about the CVC team's ability to meet its schedule for completing and commissioning the system. Because some of the spaces to be served by the air handling units were not yet ready, the sequence 2 contractor recently decided to change the sequence in which some of the air handling units would be placed in service. However, as of last week, the technical implications of this change had not been fully determined. The commissioning contractor has questioned whether enough people will be available to support the commissioning process within the scheduled time frames, and, as noted, our mechanical engineering consultant has raised operational readiness concerns.
AOC’s construction management contractor has also expressed concerns about these issues, and we have raised the issues in a number of CVC team meetings, but the responses have not given us confidence that (1) all the work associated with bringing the air handling units on line and commissioning them has been sufficiently coordinated among the team members; (2) all technical issues and risks associated with fully operating the units have been adequately addressed; and (3) that sufficient staff will be available to meet the scheduled dates. According to sequence 2 contractor personnel, these types of problems and ongoing schedule adjustments to address day-to-day events are not uncommon in large, complex construction projects. Not all the problems with the air handling units have to be resolved fully before commissioning work can proceed, they said, and air handling units are typically turned on before other work is completed to provide conditioned air for materials that need it. The sequence 2 contractor said it would work with the mechanical subcontractor and other parties to ensure that the HVAC system issues are resolved in a timely manner. Furthermore, according to the contractor personnel, contractual provisions are in place to address providing conditioned air to the CVC while construction work is underway. We understand these points and recognize the progress that has been made. However, in light of the recurring slippages in the HVAC system’s schedule, the system’s importance to the pretesting and final testing of the facility’s fire protection system, and the concerns expressed by AOC’s construction management contractor and the commissioning contractor, we believe prompt action is needed to resolve the concerns and ensure that the schedule for completing the HVAC system work is realistic and will be met. The schedule for essentially completing the construction of the House and Senate expansion spaces (currently scheduled for April 23, 2007) has slipped about 6 weeks since the Subcommittee’s last CVC hearing, and several activities important to completing these spaces have also been delayed. For example, AOC’s construction management contractor reported another 14-workday delay in completing the circular stairs in the atrium areas. Delays have also occurred in, for example, the installation of the stone arch in the House lower level, because the work is taking longer than expected, and in the installation of millwork in the House lower level, because of fabrication delays. In addition, a special fire suppression system was not installed because it had not been approved. Furthermore, the sequence 2 subcontractor doing the expansion space work identified a number of concerns that could affect the project’s completion. For example, the subcontractor reported that its schedule could be adversely affected if significant scope or design changes continue. Assuming that scope and design changes are controlled, the sequence 2 subcontractor responsible for the expansion space work hopes to recover some of the lost time and essentially complete its construction work in March 2007. In addition, the project’s schedule shows that the construction activity (excluding testing) remaining after the April 2007 essential completion date is primarily related to work necessary to complete the circular stair in the House atrium. AOC anticipates that a design change will enable the circular stairs in both the House and the Senate atriums to be completed sooner than currently scheduled. 
Finally, although not critical to the CVC's opening, work being done to connect the Library of Congress's Jefferson Building to the tunnel linking it with the CVC has fallen more than 3 weeks behind since the Subcommittee's last CVC hearing, according to the construction management contractor, at least in part because certain stone work has taken longer to install than anticipated. The subcontractor responsible for this work, which is currently scheduled for completion on April 24, 2007, expects to recover lost time and complete the work in March 2007. Furthermore, the construction management and sequence 2 contractors report that, for a number of reasons, the work on the tunnel itself has slipped about 9 ½ weeks beyond the completion date in effect at the Subcommittee's last CVC hearing. The four indicators of construction progress that we have been tracking for the Subcommittee, together with the risks and uncertainties that continue to face the project—which we will discuss shortly—demonstrate to us that AOC will be unlikely to meet its fall 2007 project completion date unless it significantly improves its project execution. An update on these indicators follows: Sequence 2 contractor has continued to miss most milestones. Starting with the Subcommittee's June 2005 CVC hearing, at the Subcommittee's request, we and AOC have been selecting and tracking sequence 2 milestones to help the Subcommittee monitor construction progress. These milestones include activities that were either on the project's critical path or that we and AOC believe are critical to the project's timely completion. As figure 1 shows, the sequence 2 contractor has generally missed these milestones. For today's hearing, the contractor met or was expected to meet 4 of the 18 milestones that were due to be completed, according to the project's September 2006 schedule, and for 1 of these 4, the work was completed ahead of schedule. However, the contractor was late in completing work for 4 other milestones and had not completed or was not expected to complete the work for the remaining 10 milestones by November 15, 2006. (See app. I.) The sequence 2 contractor attributed the slippages to a number of factors, including design issues and a need to relocate ductwork, add steel support for wall stone, and resequence work. Value of completed work has increased since the last hearing, but the trend reflects the sequence 2 contractor's difficulties in meeting scheduled completion dates. Another indicator of construction progress that we and AOC's construction management contractor have been tracking is the value of the completed construction work billed to the government each month. Overall, the sequence 2 contractor's monthly billings, including the bills for March through October 2006, indicate that construction work is about 2 months behind the late finish curve, which indicates completion around November 2007. While this indicator has some limitations (for example, billings lag behind construction), it is generally regarded in the construction industry as a useful measure of how likely a project is to be completed on time. Figure 2 compares the sequence 2 contractor's billings since May 2003 with the billings needed to complete construction work on schedule and suggests that AOC faces challenges in meeting its fall 2007 completion date and is more likely to complete the facility later than its current schedule shows.
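The billings indicator works like a simple earned-value check: find the date at which the planned (late finish) billings curve reached the amount actually billed to date, and read the offset between that date and the present as the approximate schedule lag. The following minimal sketch illustrates the calculation; the monthly dollar figures are hypothetical stand-ins, since the actual curves appear only in figure 2.

```python
# Illustrative sketch only: reading a schedule lag from a billings curve.
# The dollar figures below are hypothetical; the actual planned and billed
# curves for the sequence 2 contract appear in figure 2 of this testimony.

from datetime import date

# Cumulative planned billings (late finish curve), in millions of dollars.
planned_curve = {
    date(2006, 6, 1): 160,
    date(2006, 8, 1): 175,
    date(2006, 10, 1): 190,
    date(2006, 12, 1): 205,
}
actual_billed_to_date = 175  # hypothetical cumulative billings, in millions
as_of = date(2006, 10, 1)

# Earliest planned date by which cumulative planned billings reached the
# amount actually billed; the gap between that date and "as_of" approximates
# the schedule lag. Billings lag the physical work they represent, so the
# indicator is rough, as noted above.
planned_date = min(d for d, v in planned_curve.items() if v >= actual_billed_to_date)
lag_months = (as_of - planned_date).days / 30
print(f"Billings through {as_of} match the planned curve at {planned_date}: "
      f"roughly {lag_months:.0f} months behind schedule")
```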
Installation of interior wall and floor stone is taking longer than expected. Overall, about 86 percent of the CVC's interior wall stone has been installed (in the CVC, East Front, atrium areas, and tunnels), according to AOC's construction management contractor, and the sequence 2 contractor had installed nearly 85,000 of the 129,780 square feet of interior floor stone required as of November 9. Although the sequence 2 contractor has installed almost all of the wall stone in the CVC itself and all of the wall stone in the atrium areas, wall stone installation in the East Front is significantly behind schedule. According to the sequence 2 contractor's January 2006 wall stone installation schedule, the East Front wall stone was to be completely installed by July 10, 2006. As of November 10, about 4,700 pieces of wall stone remained to be installed in the East Front—the same quantity as we reported at the Subcommittee's last CVC hearing. During the 8 weeks since that hearing, the sequence 2 contractor installed about 34,900 square feet of interior floor stone, or about 67 percent of the 52,060 square feet specified in the floor stone installation plan that the contractor had previously provided to AOC. According to the construction management contractor, the sequence 2 contractor's installation of interior floor stone has been impeded by a lack of available space and by some work taking longer than expected. Figure 3 shows the sequence 2 contractor's progress in installing interior floor stone since February 13, 2006. As we have indicated during the Subcommittee's previous CVC hearings, we believe that the CVC team continues to face challenges, risks, and uncertainties in quickly completing the project. Given the project's history of delays, the difficulties the CVC team has encountered in quickly resolving problems that arise, and the large number of near-critical activities that can affect the project's overall completion, the CVC team's efforts to identify potential problems early and resolve issues quickly will be even more important from this point forward, because AOC has left no "slack" in the schedule for contingencies. In our view, the remaining work associated with the fire protection and HVAC systems poses the greatest risks to meeting AOC's fall 2007 project completion date. The steps AOC has taken to mitigate these risks have been helpful, but much work remains to be done on these systems and on their linkages with other building systems. In addition, the project continues to face risks and uncertainties associated with other work important to its completion, such as the East Front, and additional design or scope changes. The project's current schedule does not provide the 2 to 3 months that a previous schedule allowed for addressing ongoing challenges, risks, and uncertainties. Accordingly, we plan to continue to monitor the CVC team's efforts to meet its schedule for the fire protection, HVAC, security, and other building systems and other key near-critical activities as well as the timeliness of the actions taken by the CVC team to address problems, concerns, and questions that arise. A brief update follows on the challenges, risks, and uncertainties the CVC team continues to face and the team's plans for addressing them: Complex building systems remain a significant risk. The CVC will house complex building systems, including HVAC, fire protection, and security systems. These systems not only have to perform well individually, but their operation also has to be integrated.
If the CVC team encounters any significant problems with them, either separately or together, during the resolution of design issues, installation, commissioning, or testing, the project could be seriously delayed. The unanticipated problems that emerged in reviewing the design of the fire alarm system and in programming it illustrate the impact such problems can have on the project's schedule. AOC's Fire Marshal Division and the CVC team have recently made considerable progress in reaching agreement on the design of a number of elements of the CVC's fire protection system that are important to the purchasing and installation of wiring and equipment. As of November 13, the Fire Marshal Division had approved or essentially agreed to the designs of the sprinkler, smoke control, and emergency public address systems, as well as most aspects of the CVC's and East Front's fire alarm systems that are related to the ordering and installation of wiring and equipment. According to the Fire Marshal Division, any outstanding comments on these system elements are minor. On the other hand, agreement has not yet been reached on a number of other system elements, including the sequence of operations for the CVC fire alarm system, the design for the special fire protection system in the exhibit gallery, and the plan for final acceptance testing of the facility's fire protection system. A sequence 2 subcontractor has identified dates by which certain elements must be approved to avoid further delays. Thus, additional delays could occur if the team takes longer than expected to get the necessary remaining approvals or if the fire protection system does not work effectively, either individually or in concert with the security or other building systems. It is because of constraints such as these that we believe it is so important to address open issues associated with the HVAC system and to continue coordination with the U.S. Capitol Police on the security system. Since the Subcommittee's last CVC hearing, the Capitol Police have identified another security problem that will require additional work. The impact of this work, if any, had not been determined as of November 9. Building design and work scope continue to evolve. The CVC has undergone a number of design and work scope changes. Since September 15, 2006, AOC's architectural contractor has issued five design changes or clarifications. As of November 8, 2006, this contractor reported, another four were in process. In addition, since the project began, AOC has executed over 100 sequence 2 contract modifications for work that was not anticipated. Some of these changes, such as changes in the exhibit gallery and in the East Front, have resulted in delays. Furthermore, although shop drawings have been approved for almost all project elements, according to AOC, further design or scope changes in various project elements are likely, given the project's experience to date. Project design and scope changes are typically reflected in the development of potential change orders (PCOs), many of which result in contract modifications. Figure 4 shows the PCOs submitted for consideration for sequences 1 and 2 since September 2003. Although PCOs are not always approved, they are often regarded as a reasonably good indicator of likely future design or scope changes that can affect a project's cost and schedule. Even more important, the adverse impact of scope and design changes on a project's schedule is likely to increase as the project moves toward completion.
As the figure indicates, new PCOs for sequence 1 were submitted until shortly before, and even for several months after, November 2004, when AOC determined that the sequence 1 contract work was substantially complete. Similarly, PCOs for sequence 2 are still being submitted, and we have seen no indication that their submission is likely to stop soon. It therefore appears likely to us that some of the design or scope changes indicated in PCOs could lead to contract modifications that will affect the project’s schedule. AOC agrees that it is important to minimize the impact of proposed design and scope changes. Trade stacking could delay completion. As we discussed during the Subcommittee’s previous CVC hearings, trade stacking could hold up finish work, such as drywall or ceiling installation, electrical and plumbing work, plastering, or floor stone installation. This work could be stacked because of delays in wall stone installation. Trade stacking could also increase the risk of accidents and injuries. Hence, it remains important, as we said at previous CVC hearings, for the CVC team to closely monitor construction to identify potential trade stacking and promptly take steps to address it. The CVC team has also identified trade stacking as a high risk. The sequence 2 contractor has developed plans that show when various subcontractors will be working in various areas of the CVC. According to the sequence 2 contractor, it has been continuing to meet regularly with its subcontractors to identify and resolve potential issues. The CVC team identified instances of trade stacking that occurred in an effort to expedite certain East Front work and in doing millwork and stone work in the orientation theaters. AOC’s construction management contractor has noted trade stacking as a potential issue associated with the compressed time frame for bringing all of the air handling units on line. Additional delays associated with the CVC’s new utility tunnel have resulted, or could result, in additional work or slippages. The delay in starting up the utility tunnel’s operations has necessitated the use of temporary humidity control equipment for several areas to avoid damage to finish work and ceiling tile. Such delays may subject certain work to the risk of damage or may delay finish or ceiling work in areas not suitable for the use of temporary humidity and temperature control equipment. For example, the CVC team installed ceiling tile in portions of the great hall to take advantage of the scaffolding in place, even though neither the temperature nor the humidity was controlled in that area. According to the CVC team, the installed tile could be damaged if the temperature or humidity is not within specified levels, and certain exhibit gallery woodwork has been delayed because conditioned air has not been available. Although the CVC team expected in early August to be providing dehumidified air to the exhibit gallery by mid-August, the sequence 2 contractor now expects to begin providing conditioned air to the CVC later this month. However, as noted, the contractor has resequenced the order for bringing some air handling units on line because some spaces— including the exhibit gallery, which was slated to receive conditioned air first—were not clean enough for the units to operate. The air handling unit serving the exhibit gallery is now expected to come on line early in December. 
Remaining risks include having sufficient manpower to meet the scheduled dates for getting the HVAC system fully operational, having sufficiently clean spaces, and being able to quickly overcome any problems that may arise in getting the system properly balanced, controlled, and commissioned, including providing enough manpower without causing trade stacking. Late identification or slow resolution of problems or issues could delay completion. Historically, the project has experienced or been at risk of experiencing some delays resulting from slow decision-making. In addition, some CVC team members believe that some of the problems that have resulted in delays, such as certain problems associated with the East Front or with problematic sequence 1 concrete work, could have been identified and addressed earlier than they were. In responding to these comments, the sequence 2 contractor said that although earlier identification of these types of problems is conceptually possible, it is difficult in practice. Looking forward, we do not believe that the team will be able to meet its scheduled completion date if it does not quickly decide on issues; respond to concerns, questions, and submittals; or resolve problems. In September 2006, AOC told the CVC team that starting October 1, the architectural contractor would be decreasing its staff support to the project. In our opinion, this change increased the risk of slow responses to design questions or requests for design instructions at a very critical time, particularly because we have not seen evidence of a decrease in potential change orders. AOC believes that it will be able to provide its CVC construction contractors with sufficient architectural support to respond to appropriate questions or requests in time to avoid delays. We believe that this situation needs close monitoring as well as corrective action if problems arise. AOC has not reported any problems in this area since the last CVC hearing and has identified steps in its November 2006 action plan aimed at identifying and resolving design problems quickly. Finally, as we noted earlier in our testimony today, AOC's delay analysis is even more critical given the passage of the sequence 2 September 15, 2006, contract completion date and the need to obtain a complete facility without further delays and unreasonable costs, including delay-related costs. On April 11, 2006, AOC executed a contract modification authorizing its construction management contractor to have one of its managers who has not been involved in the CVC project assess the adequacy of this type of information. The manager submitted his report to AOC in early June. He reported generally positive findings but also identified desired improvements. He made several recommendations to AOC, which AOC has generally agreed with and plans to implement consistent with the availability of resources. The October project schedule shows that almost all physical construction work on the CVC, the East Front, and the expansion spaces will be completed by spring 2007 and that the pretesting and final testing of all fire protection, life safety, and related systems for these areas will be carried out between then and late October 2007. This schedule reflects the amount of time that AOC's Chief Fire Marshal said he would need to perform his acceptance testing, although the CVC team is working to see if certain aspects of the testing can be done differently to save some time.
The October 2006 schedule also calls for completing the installation of artifacts in the exhibit gallery by November 30, 2007. However, this schedule does not allow any time for addressing problems, risks, or uncertainties associated with obtaining a certificate of occupancy or for preparing for operations. Given the uncertainty about how much time will be needed to pretest the fire protection system, the concerns associated with the HVAC system, the unknown effectiveness of AOC's recently identified actions to curtail future schedule slippages, and the limited amount of time we had to assess the October project schedule, we do not feel that we are in a position to suggest a definitive project completion date. However, in light of the work we have done, we do not believe AOC will be able to complete the project by fall 2007 if the actions it has identified are not effective in curtailing future schedule slippages. Thus, until we see that AOC has satisfactorily addressed our schedule-related concerns, we believe that the project is more likely to be completed in early 2008 than in the fall of 2007. To minimize the risks associated with the CVC's HVAC system and the government's ability to get the CVC completed within the current schedule and cost estimates, and to give Congress and us greater confidence in the CVC team's project schedules from this point forward, we recommend that the Architect of the Capitol promptly take the following two actions: (1) work with the rest of the CVC team to ensure that the schedule for completing and commissioning the HVAC system is realistic; that all the work necessary for the proper and safe functioning of the HVAC system—including work in the spaces the air handling units are to serve—is completed in a timely, well-coordinated manner; and that sufficient resources will be available to meet the schedule without creating a trade-stacking problem; and (2) carefully consider the contractual remedies available to AOC to complete all tasks that must precede the start and completion of final acceptance testing of the CVC's fire protection and life safety systems within the time necessary to meet the estimated fall 2007 project completion time frame. AOC generally agreed with our recommendations. Since the Subcommittee's September 21 CVC hearing, we have added about $8 million to our estimate of the total cost of the CVC project at completion. This increase reflects a rough estimate of the impact on the project's cost of the 6-week delay associated with the fire protection system and other scope and design changes identified during the past 8 weeks; however, the actual costs for changes are not yet known, and we have not had sufficient time to fully assess the CVC team's cost estimates incorporated in our estimate. With this approximately $8 million increase, we now estimate, on the basis of our limited review, that the total cost of the entire CVC project at completion is likely to be about $592 million without an allowance for risks and uncertainties. We nevertheless recognize that the project continues to face a number of uncertainties, including uncertainty over the extent of AOC's responsibility for delay-related costs. (We have not updated our estimate of the project's cost at completion with an allowance for risks and uncertainties.) To date, about $531 million has been provided for CVC construction.
This amount includes about $3.9 million that was made available for either CVC construction or operations and has been approved for CVC construction by the House and Senate Committees on Appropriations. An earlier cost-to-complete estimate, prepared for the Subcommittee's March 2006 CVC hearing, showed that another $26 million in construction funds would be necessary to reach the previous cost estimate of $556 million, which did not include an allowance for risks and uncertainties. AOC has requested this additional $26 million in its fiscal year 2007 budget for CVC construction. AOC has also requested $950,000 in fiscal year 2007 general administration appropriation funds to provide contractual support for the Chief Fire Marshal's final acceptance testing of the CVC. During fiscal year 2007, AOC is also likely to need, but has not yet requested, additional funds to pay for changes. At the Subcommittee's last CVC hearing, we roughly estimated that AOC would need an additional $5 million to $10 million in fiscal year 2007 for changes unless it decides to use funds slated for other purposes, after obtaining the necessary congressional approvals. AOC agrees with this rough estimate at this time and notes that it would likely need additional funding in fiscal year 2008 to replenish these funds and to cover certain additional costs if they materialize. Mr. Chairman, this completes our prepared statement. We would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Shirley Abel, John Craig, Maria Edelstein, Elizabeth Eisenstadt, Jeanette Franzel, Jackie Hamilton, Bradley James, Joshua Ormond, and Scott Riback. The following status notes accompany this statement; they cover work in areas including Elevator VC #17, East Front Elevator VC #12, the orientation theaters, the great hall, and air handling units (AHU) #3 and #16. The blistered panel will need to be repaired or replaced. Continuing efforts are being made to understand and develop the sequence of operations (CONOPS) matrix requirements; the matrix must be approved by December 8, 2006, to avoid an impact on the critical path. Enough of the ductwork has been relocated to allow hood installation to begin; currently, three of the six hoods have been installed, and the balance of hood installation is scheduled to be complete by November 24, 2006. Control panels are set and operational. The elevator activity included setting cab shells without finishes; the cab vendor decided to prefinish the cabs rather than finish the shells on site, and finished cabs are on site with preparations being made for installation this week. Scaffolding for ceiling installation has been removed, and scaffolding has been erected along the walls in the south theater to install wood panels; this scaffolding will affect installation of the stone stair steps. That work is essentially completed, and panel installation began on November 9, 2006. Plaster ceilings have been completed in the main lobby area and the south assembly room, and hanging of the north assembly room ceiling began on November 7, 2006. Installation of the unistrut framing was delayed because of structural design concerns; additional cross bracing was added to stiffen the assembly, and wall stone installation is to begin this week. Other work has been delayed because structural steel was added to support the metal stud wall at the east side of stair #37; upon completion of the metal stud wall, the stone work is scheduled to begin, and the revised start date for wall stone on the principal level is November 30, 2006. Fabric ceiling panel installation has been delayed because of delays in necessary preceding East Front work—completion of the East Front archway stone, ceiling, and elevator installation. Installation of the fabric panels currently cannot be completed until the elevator trusses are set to clear the floor area; setting of the trusses is currently projected to be completed by the end of January 2007. Because of above-ceiling conflicts, work was resequenced to allow the floor stone installation to proceed ahead of the ceiling work. Hanging of bulkheads started in the south screening area on November 2, 2006, and is expected to be completed in November 2006. Mechanically ready priorities have been resequenced: AHU #1 has been switched with AHUs #3 and #16, which are now scheduled for November 15, 2006, and AHU #1 is now scheduled for December 6, 2006; AOC's construction management contractor believes that this activity is essentially complete. The sequence 2 and construction management contractors expect other work to be essentially completed by close of business today. The start of wood panel installation is pending humidity control within the space.
We are pleased to assist the Senate Committee on Appropriations, Subcommittee on the Legislative Branch in monitoring progress on the Capitol Visitor Center (CVC) project. Our remarks will focus on (1) the Architect of the Capitol's (AOC) construction progress and problems since the Subcommittee's September 21, 2006, hearing and their impact on the project's schedule; and (2) the project's expected cost at completion and funding situation. As part of this discussion, we will address a number of key challenges and risks that continue to face the project as well as actions AOC has recently taken, and plans or needs to take, to meet its currently scheduled completion date. Since the Subcommittee's September 21 CVC hearing, the CVC team has continued to move the project's construction forward, but the project's scheduled completion date has slipped by 6 weeks, to October 26, 2007, and further delays are possible. The 6-week delay was attributable to problems with the project's most critical activity—the fire protection system. Under the current schedule, the construction of the House and Senate expansion spaces will be completed before the CVC's construction, but both the CVC and the expansion spaces will be available for occupancy at the same time because final acceptance testing of both is slated to be done concurrently. During the past month, the CVC team has made progress on the project's heating, ventilation, and air-conditioning (HVAC) system, interior floor stone and ceiling installation, and other interior and exterior construction work. In addition, AOC sent Congress an action plan for improving its execution of the project and the project's schedule, as the Subcommittee requested and we had recommended, and this plan is responsive to our recommendations. AOC is also considering other actions not discussed in this plan. Despite this progress, problems have occurred in many important activities besides the CVC's fire protection system, according to AOC's construction management contractor. Although these delays did not add time to the project's schedule this month, additional delays could do so in the future. Recently identified issues associated with the CVC's HVAC system, fire protection system, and security system—including issues associated with their coordination and testing—also pose risks to the project's scheduled completion date. In addition, concerns have arisen about AOC's ability to achieve a high-quality, complete, and usable facility within the current estimated time frame and cost now that the contractual date for completing sequence 2 construction work—September 15, 2006—has passed. In particular, there is a risk that, absent negative consequences for missing that date, the resolve of some major stakeholders to complete the project in a timely and efficient manner could weaken. Finally, all the indicators of progress that we have been tracking for the Subcommittee, together with other risks and uncertainties, suggest that the project is likely to finish later than October 2007. It is too early to tell whether the actions identified in AOC's November 2006 action plan will be effective in curtailing additional schedule slippages. Furthermore, the concerns identified since the Subcommittee's last CVC hearing, particularly those related to the CVC's HVAC system, if not quickly addressed, could adversely affect the project's schedule.
Thus, until it is clear that AOC's actions are effective in curtailing additional schedule slippages, we believe that the facility is more likely to be completed in early 2008 than in the fall of 2007.
The number of biosafety level (BSL)-3 and BSL-4 laboratories (high-containment laboratories) began to rise in the late 1990s, accelerating after the 2001 anthrax attacks in the United States. The laboratories expanded across federal, state, academic, and private sectors. Information about their number, location, activities, and ownership is available for high-containment laboratories registered with CDC's Division of Select Agents and Toxins (DSAT) or the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) as part of the Federal Select Agent Program. These entities register laboratories that work with select agents that have specific potential human, animal, or plant health risks. Other high-containment laboratories work with pathogens that may also be dangerous but are not identified as “select agents,” and these laboratories therefore are not required to register with DSAT or APHIS. We reported in 2009 that little is known about these non-select-agent laboratories. Our work has found that the expansion of high-containment laboratories was not based on a government-wide coordinated strategy. The expansion was based on the perceptions of individual agencies about the capacity required for their individual missions and the high-containment laboratory activities needed to meet those missions, as well as the availability of congressionally approved funding. Decisions to fund the construction of high-containment laboratories were made by multiple federal agencies (e.g., the Department of Health and Human Services (HHS), the Department of Defense, and USDA), in multiple budget cycles. Federal and state agencies, academia, and the private sector (such as drug companies) considered their individual requirements, but as we have previously reported, a robust assessment of national needs was lacking. Since each agency or organization has a different mission, an assessment of needs, by definition, was at the discretion of the agency or organization. We have not found any national research agenda linking all these agencies, even at the federal level, that would allow for a national needs assessment, strategic plan, or coordinated oversight. As we last reported in 2013, after more than 12 years, we have not been able to find any detailed projections based on a government-wide strategic evaluation of research requirements grounded in public health or national security needs. Without this information, there is little assurance of having facilities with the right capacity to meet our national needs. This deficiency may be more critical today than 5 years ago, when we first reported on this concern, because current budget constraints make prioritization essential. Our work on this issue has found a continued lack of national standards for designing, constructing, commissioning, and operating high-containment laboratories. These laboratories are expensive to build, operate, and maintain. For example, we noted in our 2009 report that the absence of national standards means that the laboratories may vary from place to place because of differences in local building requirements or standards for safe operations. In 2007, while investigating a power outage at one of its recently constructed BSL-4 laboratories, CDC determined that construction workers digging at an adjacent site had some time earlier cut a critical grounding cable buried outside the building.
CDC facility managers had not noticed that cutting the grounding cable had compromised the electrical system of the facility that housed the BSL-4 laboratory. It became apparent that the building's integrity as it related to the adjacent construction had not been adequately monitored. In 2009, CDC officials told us that standard procedures under local building codes did not require monitoring of the new BSL-4 facility's electrical grounding. This incident highlighted the risk of relying on local building codes to ensure the safety of high-containment laboratories in the absence of national standards or testing procedures specific to those laboratories. Some guidance exists about designing, constructing, and operating high-containment laboratories. The Biosafety in Microbiological and Biomedical Laboratories guidance, often referred to as the BMBL, recommends various design, construction, and operations standards, but our work has found that it is not universally followed. Nor does the guidance recommend an assessment of whether the suggested design, construction, and operations standards are achieved. As we have recommended, national standards would be valuable for not only new laboratory construction but also periodic upgrades. Such standards need not be constrained to a “one-size-fits-all” model but could help specify the levels of facility performance that should be achieved. (Department of Health and Human Services, Biosafety in Microbiological and Biomedical Laboratories, 5th ed., Washington, D.C., 2007. HHS developed and provides the biosafety guidelines outlined in this manual.) Moreover, oversight of high-containment laboratories is fragmented and largely self-policing, relying on requirements and guidelines developed by the funding or regulatory agencies. In 2013, we reported that another challenge of this fragmented oversight is the potential duplication and overlap of inspection activities in the regulation of high-containment laboratories. We recommended that CDC and APHIS work with the internal inspectors for the Department of Defense and the Department of Homeland Security to coordinate inspections and ensure the application of consistent inspection standards. According to most experts that we have spoken to in the course of our work, a baseline risk is associated with any high-containment laboratory. Although technology and improved scientific practice guidance have reduced the risk in high-containment laboratories, the risk is not zero (as illustrated by the recent incidents and others during the past decade). According to CDC officials, the risks from accidental exposure or release can never be completely eliminated, and even laboratories within sophisticated biological research programs—including those most extensively regulated—have had and will continue to have safety failures. Many experts agree that as the number of high-containment laboratories increases, so does the overall risk of an accidental or deliberate release of a dangerous pathogen. Oversight is critical in improving biosafety and ensuring that high-containment laboratories comply with regulations. However, our work has found that aspects of the current oversight programs provided by DSAT and APHIS depend on entities' monitoring themselves and reporting incidents to the regulators.
For example, with respect to a certification that a select agent had been rendered sterile (that is, noninfectious), DSAT officials told us, citing the June 2014 updated guidance, that “the burden of validating non-viability and non-functionality remains on the individual or entity possessing the select agent, toxin, or regulated nucleic acid.” While DSAT does not approve each entity's scientific procedure, DSAT strongly recommends that “an entity maintain information on file in support of the method used for rendering a select agent non-viable . . . so that the entity is able to demonstrate that the agent . . . is no longer subject to the select agent regulations.” Biosafety and select agent regulations and oversight critically rely on laboratories promptly reporting any incidents that may expose employees or the public to infectious pathogens. Although laboratories have been reasonably conscientious about reporting such incidents, there is evidence that not all have been reported promptly. The June 2014 incident in which live anthrax bacteria were transferred from a BSL-3 contained environment to lower-level (BSL-2) containment laboratories at CDC in Atlanta resulted in the potential exposure of dozens of workers to the highly virulent Ames strain of anthrax. According to CDC's report, on June 5, a laboratory scientist in the BSL-3 Bioterrorism Rapid Response and Advanced Technology (BRRAT) laboratory prepared protein extracts from eight bacterial select agents, including Bacillus anthracis, under high-containment (BSL-3) conditions. These samples were being prepared for analysis by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, a relatively new technology that can be used for rapid bacterial species identification. Also, according to CDC officials we spoke to, this protein extraction procedure was being evaluated in a preliminary assessment of whether MALDI-TOF mass spectrometry could provide a cheaper and faster way to detect a range of pathogenic agents, including anthrax, compared with conventional methods and thus could be used by emergency response laboratories. According to CDC officials, the researchers intended to use the data collected in this experiment to submit a joint proposal to CDC's Office of Public Health Preparedness and Response to fund further evaluation of the MALDI-TOF method, because MALDI-TOF is increasingly being used by clinical and hospital laboratories for infectious disease diagnostics. The protein extraction procedure was chemically based and intended to render the pathogens noninfectious; alternative extraction procedures—using heat, radiation, or other chemical treatments—would also have rendered the pathogens noninfectious but would have taken longer. The procedure that was used to extract the proteins was not based on a standard operating procedure that had been documented as appropriate for all the pathogens in the experiment and reviewed by more senior scientific or management officials. Rather, the scientists used a procedure identified by the MALDI-TOF equipment manufacturer that had not been tested for effectiveness, in particular, for rendering spore-forming organisms such as anthrax noninfectious. Following that procedure, the eight pathogens were exposed to chemical treatment for 10 minutes and then plated (spread on plates to test for sterility or noninfectious status) and incubated for 24 hours.
According to CDC, on June 6, when no growth was observed on sterility plates after 24 hours, the remaining samples, which had been held in the chemical solution for 24 hours, were moved to CDC BSL-2 laboratories for testing using the MALDI-TOF technology. Importantly, the plates containing the original sterility samples were left in the incubation chamber rather than destroyed, as would normally occur, because of technical problems with the autoclave that would have been used for destruction. According to CDC officials, on June 13, a laboratory scientist in the BRRAT laboratory observed unexpected growth on the anthrax sterility plate, possibly indicating that the sample was still infectious. (All the other pathogen protein samples showed no evidence of growth.) That scientist and a colleague immediately reported the discovery to the CDC Select Agent Responsible Official (RO) in accordance with the BRRAT Laboratory Incident Response Plan. That report triggered a response that immediately recovered the samples that had been sent to the BSL-2 laboratories and returned them to BSL-3 containment, and a response effort that lasted a number of days was implemented to identify any CDC employees who might have been affected by exposure to live anthrax spores. (The details of the subsequent actions and CDC's lessons learned and proposed actions are described in CDC's July 11, 2014, Report on Potential Exposure to Anthrax. That report indicates that none of the potentially affected employees experienced anthrax-related adverse medical symptoms.) Our preliminary analysis indicates that the BRRAT laboratory was using a MALDI-TOF MS method that had been designed for protein extraction but not for the inactivation of pathogens and that it did not have a standard operating procedure (SOP) or protocol on inactivation. We did not find a complete set of SOPs for removing agents from a BSL-3 laboratory in a safe manner. Further, neither the preparing (BRRAT BSL-3) laboratory nor the receiving (BRRAT BSL-2) laboratory conducted sterility testing. Moreover, the BRRAT laboratory did not have a kill curve based on multiple concentration levels. When we visited CDC on July 8, it became apparent to us that a major cause of this incident was the implementation of an experiment to prepare protein extractions for testing using the MALDI-TOF technology that was not based on a validated standard operating procedure. CDC officials acknowledged that significant and relevant studies in the scientific literature, which examined chemical procedures for preparing protein samples for use with the MALDI-TOF technology, were successful in rendering tested pathogens noninfectious, except for anthrax. The literature clearly recommends an additional filtering step before concluding that anthrax samples are not infectious. Our preliminary work indicates that this step was not followed for all the materials in this incident. Validating a procedure or method provides a defined level of statistical confidence in its results. In a 2004 report on an earlier incident, CDC noted that laboratory staff did not perform sterility testing on the suspension received in March 2004. CDC's 2004 report further stated that “Research laboratory workers should assume that all inactivated B. anthracis suspension materials are infectious until inactivation is adequately confirmed [using BSL-2 laboratory procedures].” These recommendations are relevant to the June 2014 incident in Atlanta but were not followed.
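To illustrate why a single sterility plate provides only weak assurance—and the kind of defined statistical confidence that validation and a kill curve are meant to supply—consider the following simplified sketch. It is illustrative only, not CDC's protocol; the residual spore concentration and plated volume are hypothetical numbers chosen for the example.

```python
import math

def p_no_growth(viable_per_ml: float, plated_ml: float) -> float:
    """Probability that a single sterility plate shows zero colonies,
    modeling the colony count as Poisson with mean viable_per_ml * plated_ml."""
    return math.exp(-viable_per_ml * plated_ml)

# Hypothetical numbers: the treatment leaves 5 viable spores per mL,
# and 0.1 mL of the sample is plated for the sterility check.
single_plate = p_no_growth(viable_per_ml=5.0, plated_ml=0.1)
print(f"One plate clear despite residual viability: {single_plate:.2f}")  # ~0.61

# Replicate plates (and, in a kill curve, multiple starting concentrations)
# shrink the false-negative probability multiplicatively.
for n_plates in (1, 3, 10):
    print(f"{n_plates:>2} plates all clear: {single_plate ** n_plates:.3f}")
```

Under these assumed numbers, a single plate would show no growth about 61 percent of the time even though viable spores remain, which is consistent with the point above that a validated procedure and a kill curve across multiple concentration levels—rather than one sterility plate—are needed before declaring material noninfectious.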
The laboratories receiving the protein extractions were BSL-2 laboratories, but the activities associated with testing with the MALDI-TOF technology were conducted on open laboratory benches, not in the biocontainment cabinets otherwise available in such laboratories. CDC's July 11, 2014, Report on the Potential Exposure to Anthrax describes a number of actions that CDC plans to take within its responsibilities to avoid another incident like the one in June. However, we continue to believe that a national strategy is warranted that would evaluate the requirements for high-containment laboratories, set and maintain national standards for such laboratories' construction and operation, and provide for the oversight of laboratories that conduct important research on highly infectious pathogens. This completes my formal statement, Chairman Murphy, Ranking Member DeGette, and members of the committee. I am happy to answer any questions you may have. For future contacts regarding this statement, please contact Nancy Kingsbury at (202) 512-2700 or at kingsburyn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Sushil Sharma, Ph.D., Dr.PH, Assistant Director; and Elaine L. Vaurio also made key contributions to this statement. High-Containment Laboratories: Assessment of the Nation's Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013. Biological Laboratories: Design and Implementation Considerations for Safety Reporting Systems. GAO-10-850. Washington, D.C.: September 10, 2010. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1045T. Washington, D.C.: September 22, 2009. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1036T. Washington, D.C.: September 22, 2009. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-574. Washington, D.C.: September 21, 2009. Biological Research: Observations on DHS's Analyses Concerning Whether FMD Research Can Be Done as Safely on the Mainland as on Plum Island. GAO-09-747. Washington, D.C.: July 30, 2009. High-Containment Biosafety Laboratories: DHS Lacks Evidence to Conclude That Foot-and-Mouth Disease Research Can Be Done Safely on the U.S. Mainland. GAO-08-821T. Washington, D.C.: May 22, 2008. High-Containment Biosafety Laboratories: Preliminary Observations on the Oversight of the Proliferation of BSL-3 and BSL-4 Laboratories in the United States. GAO-08-108T. Washington, D.C.: October 4, 2007. Biological Research Laboratories: Issues Associated with the Expansion of Laboratories Funded by the National Institute of Allergy and Infectious Diseases. GAO-07-333R. Washington, D.C.: February 22, 2007. Homeland Security: CDC's Oversight of the Select Agent Program. GAO-03-315R. Washington, D.C.: November 22, 2002.
Recent biosecurity incidents—such as the June 5, 2014, potential exposure of staff in Atlanta laboratories at the Centers for Disease Control and Prevention (CDC) to live spores of a strain of anthrax—highlight the importance of maintaining biosafety and biosecurity protocols at high-containment laboratories. This statement summarizes the results of GAO's past work on the oversight of high-containment laboratories, those designed for handling dangerous pathogens and emerging infectious diseases. Specifically, this statement addresses (1) the need for government-wide strategic planning for the requirements for high-containment laboratories, including assessment of their risks; (2) the need for national standards for designing, constructing, commissioning, operating, and maintaining such laboratories; and (3) the oversight of biosafety and biosecurity at high-containment laboratories. In addition, it provides GAO's preliminary observations on the potential exposure of CDC staff to anthrax. For this preliminary work, GAO reviewed agency documents, including a report on the potential exposure, and scientific literature and interviewed CDC officials. No federal entity is responsible for strategic planning and oversight of high-containment laboratories. Since the 1990s, the number of high-containment laboratories has risen; however, the expansion of high-containment laboratories was not based on a government-wide coordinated strategy. Instead, the expansion was based on the perceptions of individual agencies about the capacity required for their individual missions and the high-containment laboratory activities needed to meet those missions, as well as the availability of congressionally approved funding. As a consequence of this mode of expansion, there was no research agenda linking all these agencies, even at the federal level, that would allow for a national needs assessment, strategic plan, or coordinated oversight. As GAO last reported in 2013, after more than 12 years, GAO has not been able to find any detailed projections based on a government-wide strategic evaluation of research requirements grounded in public health or national security needs. Without this information, there is little assurance of having facilities with the right capacity to meet the nation's needs. GAO's past work has found a continued lack of national standards for designing, constructing, commissioning, and operating high-containment laboratories. As noted in a 2009 report, the absence of national standards means that the laboratories may vary from place to place because of differences in local building requirements or standards for safe operations. Some guidance exists about designing, constructing, and operating high-containment laboratories. Specifically, the Biosafety in Microbiological and Biomedical Laboratories guidance recommends various design, construction, and operations standards, but GAO's work has found it is not universally followed. The guidance also does not recommend an assessment of whether the suggested design, construction, and operational standards are achieved. As GAO has reported, national standards are valuable not only in relation to new laboratory construction but also in ensuring compliance for periodic upgrades.
No one agency is responsible for determining the aggregate or cumulative risks associated with the continued expansion of high-containment laboratories; according to experts and federal officials GAO interviewed for prior work, the oversight of these laboratories is fragmented and largely self-policing. On July 11, 2014, the Centers for Disease Control and Prevention (CDC) released a report on the potential exposure to anthrax that described a number of actions CDC plans to take within its responsibilities to avoid another incident like the one in June. The June incident occurred when laboratory scientists, relying on a new preparation method that had not been validated for inactivating anthrax, inadvertently transferred anthrax samples that had not been confirmed noninfectious to laboratories with lower-level biosafety protocols. This incident and the inherent risks of this work highlight the need for a national strategy to evaluate the requirements for high-containment laboratories, set and maintain national standards for such laboratories' construction and operation, and provide for the oversight of laboratories that conduct important work on highly infectious pathogens. This testimony contains no new recommendations, but GAO has made recommendations in prior reports to the responsible agencies.
ISR directly supports the planning and conduct of current and future military operations. ISR encompasses multiple activities related to the planning and operation of sensors and assets that collect, process, and disseminate data. Intelligence data can take many forms, including electro-optical (EO) and infrared (IR) images, full-motion video (FMV), images from a synthetic aperture radar (SAR), electronics intelligence (ELINT), communications intelligence (COMINT), and measurement and signature intelligence (MASINT). These data can come from a variety of sources, including surveillance and reconnaissance systems that operate in space or on manned or unmanned systems. Data can also come from systems that are ground- or sea-based or from human intelligence teams. Table 1 summarizes the ISR programs that we reviewed, 13 of which are in development. (A brief description of each of the 20 programs we reviewed is provided in app. II.) DOD plans significant investments in airborne ISR systems. For example, between fiscal years 2007 and 2013, DOD plans to invest $28.8 billion in the 20 systems we reviewed. (See table 2.) Congress has also recognized the need for unmanned aircraft systems (UAS). For example, it added funding between fiscal years 2003 and 2005 to enable the Air Force to accelerate procurement of the Reaper UAS. Over those 3 years, Congress increased the Reaper budget by over $70 million, directing the Air Force to procure a total of 8 additional air vehicles. Similarly, the Navy Fire Scout budget was increased by $17 million in fiscal year 2006 to procure 2 additional air vehicles. In fiscal year 2003, the Global Hawk budget was increased by $90 million, primarily to develop advanced payloads for signals and imagery intelligence capabilities. Nearly all of the 13 airborne ISR programs in development that we reviewed have experienced some cost or schedule growth. Cost and schedule growth in these programs is largely the result of a poor business case or acquisition strategy that failed to capture sufficient knowledge about the product technologies and design before committing to the development and demonstration of a new system. Significant delays in the delivery of some new systems, breaking the investment strategy (for the new and legacy systems to be replaced) established at the start of these acquisition programs, have required DOD to make additional unplanned investments in legacy systems in order to keep them relevant and operational longer than planned. Delays in the Aerial Common Sensor (ACS) aircraft program, for example, have required such unplanned investments in three Army and Navy legacy aircraft systems. These additional investments, totaling about $900 million, represent opportunity costs—funds that could have been used for other needs within DOD. Of the 13 airborne ISR programs in development, 1 has experienced significant cost growth and 9 have experienced schedule delays that range from 2 months to 60 months. Table 3 summarizes ISR programs that have encountered problems either in development or as they prepared to begin the system development and demonstration phase of an acquisition program. Many of these programs began development without an executable business case and did not have a good acquisition strategy to capture critical system knowledge at the key decision milestones.
Our work on best commercial practices has shown that before a company invests in product development, it should develop a sound business case—one that validates user requirements and determines that the concept can be successfully developed with existing resources—to minimize the risks associated with such a commitment. For DOD, an executable business case provides demonstrated evidence that (1) the warfighter need is valid and that it can best be met with the chosen concept, and (2) the concept can be developed and produced with proven technologies, existing design knowledge, and available funding and time. To implement the business case, programs must develop a realistic acquisition strategy, one that captures critical program knowledge—such as technology maturity, system design, and manufacturing and production processes—at key points in the acquisition. DOD's acquisition policy endorses a knowledge-based approach to acquisition and includes strategies to reduce technology, integration, design, manufacturing, and production risks. Global Hawk is an example of a program that failed to implement best practices for developing a new weapon system and encountered significant cost and schedule problems. It initially began with an incremental acquisition strategy that approached best practice standards for technology and design maturity. However, after development began, the Air Force abandoned this strategy and radically restructured the program to develop and acquire a larger, more advanced aircraft that would have multimission capabilities (both signals intelligence and imagery intelligence sensors on the same aircraft). This new strategy called for concurrent technology development, design, test, integration, and production in a compressed schedule. As a result, the program has been rebaselined four times, the development schedule has been extended by 3 years, and the program has experienced a substantial contract cost overrun. Development costs alone have increased over 260 percent. In addition, unit costs have increased to the point where statutory reporting thresholds were triggered, requiring DOD to recertify the fundamental program need to Congress. The ACS and Global Hawk programs' failures to develop an executable acquisition strategy have resulted in significant delays in delivering the capabilities that were promised to the warfighter when the overall investment decisions were made. These delays will have significant implications for legacy systems. Specifically, the services must now make difficult decisions about investing in legacy systems to keep them operational until the new systems have been developed and fielded. The Army's termination of the ACS system development and demonstration contract could have significant cost, schedule, and performance impacts on three legacy airborne systems in the ISR portfolio—the Army's Guardrail Common Sensor aircraft (GRCS) and Airborne Reconnaissance Low aircraft (ARL) and the Navy's EP-3 aircraft. The Army and the Navy had planned a phased approach to field ACS and retire the legacy systems from the inventory with a minimal investment in maintaining legacy systems. In the fiscal year 2004 budget, the Army had planned for small investments in GRCS and ARL because it expected to begin replacing them with ACS in 2009. In that same budget, the Navy's request reflected its plan to modify the EP-3.
By the time DOD submitted its fiscal year 2008 budget, both services recognized the need to keep the legacy systems capable because the ACS development contract had been canceled. Therefore, the budget included funding to keep these legacy systems operational for a longer period of time. Since the termination of the ACS development contract, the program has reverted to a technology development stage as the Army restructures the program. ACS is scheduled to restart system development and demonstration in 2009, 5 years later than the initial development decision. Although the Army has not established a new date for initial operating capability, that date is also likely to slip by 5 years, to fiscal year 2014. The cost to keep GRCS and ARL mission equipment viable and the platforms airworthy during this time is estimated to be $562 million between fiscal years 2008 and 2013, an increase of $550 million over what had been previously planned. Without these improvements, the systems will not remain capable against modern threats, possibly resulting in a gap in ISR capabilities on the battlefield. In addition, the platforms could not continue to fly during this time frame without increased structural and avionic modifications. The Navy had planned to replace its EP-3 with ACS and begin fielding the new system in fiscal year 2012. After the Army terminated the ACS development contract, the Navy considered remaining part of the Army's development effort. However, according to Navy officials, the Chief of Naval Operations directed the Navy to proceed with a separate development effort, designated the EPX. The Navy now plans to proceed with system development and demonstration in the fourth quarter of fiscal year 2010. The Navy has not established a date to begin fielding the new system, but that is not likely to take place before 2017. This will be a 5-year slip in retiring the oldest EP-3 systems and will make modifications to those systems necessary so that they can remain in the field until the Navy achieves full operating capability for the EPX. The Navy plans to invest $823 million between fiscal years 2008 and 2013 to modify the EP-3, an increase of 73 percent over the $475 million that was previously planned. Table 4 summarizes the budgetary impact of the delay in developing and fielding ACS on the legacy systems it was to replace. The Air Force plans to replace the U-2 with the Global Hawk, but delays in the Global Hawk program have contributed to the need to keep the U-2 in the inventory longer than anticipated. In December 2005, the Air Force had planned to begin retiring the U-2 in fiscal year 2007 and complete the retirement by fiscal year 2012. Although the next configuration of the Global Hawk (with limited signals intelligence capability) is scheduled for delivery in fiscal year 2009, it will not have the same capability as the U-2. The version of the Global Hawk that is planned to include a more robust signals intelligence capability is scheduled to begin deliveries in 2012. The Air Force is now developing a plan to fully retire the U-2s a year later, in 2013, and at a slower rate than in the 2005 plan. No funds for this purpose are in the budget beyond fiscal year 2006, but Air Force officials stated that they intend to fund the projects necessary to keep the U-2 capable. Figure 1 shows the rate at which the Air Force had planned to retire the U-2 and the revised retirement plan compared with Global Hawk fielding.
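To make the baseline implied by these figures explicit, the following quick arithmetic check (a sketch using only the rounded numbers reported above) reproduces the percentage increase and shows how little the services had budgeted for these aircraft before the ACS delay.

```python
# Check of the legacy-system cost figures cited above (rounded, $ millions,
# fiscal years 2008-2013, as reported in this statement).
ep3_planned, ep3_revised = 475, 823
pct_increase = (ep3_revised - ep3_planned) / ep3_planned * 100
print(f"EP-3 modification increase: {pct_increase:.0f} percent")  # ~73 percent

grcs_arl_revised, grcs_arl_increase = 562, 550
baseline = grcs_arl_revised - grcs_arl_increase
print(f"GRCS/ARL previously planned: about ${baseline} million")
# About $12 million: the services had budgeted almost nothing for sustaining
# these aircraft because they expected ACS to begin replacing them in 2009.
```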
Among the ISR acquisition programs we reviewed, we found specific cases where the military services have successfully collaborated and achieved savings of time and resources. The Army estimated that for its Fire Scout program, buying common components with the Navy Fire Scout program would save $200 million in development costs alone and that there were greater opportunities for savings. However, we also found cases where more collaboration is needed to provide greater efficiencies and jointness in developing more affordable new systems and to close gaps in capabilities. These programs include the potential for greater collaboration between the Navy Broad Area Maritime Surveillance (BAMS) and the Air Force Global Hawk programs, and between the Air Force Predator and Army Warrior programs. In 2000, the Navy began development of the Fire Scout, a vertical takeoff and landing UAS for use on surface ships. At the same time, the Army began concept development for its Future Combat Systems (FCS) program. The Army Fire Scout was selected for the Future Combat Systems program in 2003. Although these programs were not required to work jointly or collaborate, Army Fire Scout program managers contacted their counterparts in the Navy to determine whether efficiencies could be achieved through collaboration. Officials from the two programs met several times to share information on their respective aircraft's configuration, performance requirements, testing, support, and other issues. Initially, the requirements for the two systems were quite different. For example, the Army's UAS had four rotor blades and a larger engine, while the Navy's system had three rotor blades and a smaller engine. However, after discussions, the Navy officials determined that the Army's Fire Scout aircraft would meet their needs and decided to switch to the Army's configuration. Both services are buying common components, such as the air vehicle and flight components, under one contract. An Army program management official estimated that the savings to the Army in research and development alone would be about $200 million. As both programs mature, the official believes additional synergies and savings could be realized through contract price breaks on quantities and shared test assets, such as air vehicles, support equipment, and test components. Jointly acquiring common hardware under one contract will also reduce procurement administrative lead time and permit common design, tooling, and testing. Finally, future payload development, such as communications, sensors, and data links, could be procured jointly. In 2000, the Navy identified a mission need for a broad area maritime and littoral ISR capability, and on the basis of a 2002 analysis of alternatives, the Navy decided to pursue a manned platform, the Multi-mission Maritime Aircraft (MMA), with an unmanned adjunct, BAMS. The Navy subsequently performed an analysis of alternatives for the BAMS program, which identified several potential alternatives; foremost among them was the Global Hawk system. As a risk reduction effort, the Navy funded the Global Hawk Maritime Demonstration program in 2003. Working through the existing Air Force contract, the Navy procured two Global Hawk UAS and associated ground controls and equipment. The demonstration program was expected to leverage the existing Global Hawk system to develop tactics, training, and techniques for maritime mission applications. The BAMS program is at a critical juncture.
It released a request for proposals in February 2007 and plans to proceed with system development and demonstration in October 2007. If the Global Hawk (or another existing system like the Air Force Reaper) is selected, there are opportunities for the Navy to work with the Air Force and take advantage of its knowledge of the existing platform. Through collaboration, the Navy could leverage knowledge early in the acquisition process; avoid or reduce costs for design, new tooling, and manufacturing; and streamline contracting and acquisition processes. Despite similarities in the Predator and Warrior programs, the Air Force and Army have repeatedly resisted collaboration. The Air Force's Predator is a legacy program that has been operational since 1995. Its persistent surveillance/full-motion video capability continues to be a valued asset to the warfighter. However, when the Army began in 2001 to define requirements for the Warrior, a system similar to the Predator, it did not fully explore potential synergies and efficiencies with the Air Force program. The Army did not perform an analysis of alternatives to explore options other than a new system; it cited the urgent need of battlefield commanders for the capability. In lieu of an analysis of alternatives, the Army conducted a full and open competition and awarded the contract to the same contractor producing the Predator. Although the requirements for the Warrior were subsequently validated, reviewing officials from the Air Force and the Joint Staff raised concerns about duplication of an existing capability. Both Congress and the Office of the Secretary of Defense (OSD) have raised concerns about duplication between the two systems. During question and answer sessions at various congressional hearings, members of Congress sought an explanation of the need for both systems. In addition, OSD commissioned an industrial capabilities study to assess whether the contractor for the Predator and the Warrior had sufficient capacity to produce both systems at the same time. While the study did not find any major production constraints, it concluded that the two systems were 80 percent common. In January 2006, the Army and Air Force agreed to consider cooperating on the acquisition of the two systems. However, progress to date in implementing the agreement has been limited, owing partly to differences in the two services' operating concepts. Unlike the Air Force, the Army does not use rated pilots; it relies on technicians and automated takeoff and landing equipment. In addition, the Army uses direct-line-of-sight communications, while the Air Force uses beyond-line-of-sight communications. Despite these inherent differences, there are still many areas available for collaboration, including airframes, ground stations, and equipment. The Air Force and the Army are currently working to identify program synergies in a three-phased approach: First, the Air Force will acquire and test two of the more modern Warrior airframes. Second, the two services will compare their requirements for ground control stations and automated takeoff and landing. Finally, the Army and Air Force plan to compare concepts of operation and training requirements for additional synergies. To date, the Army has coordinated the proposed approach through the Vice Chief of Staff level, but the agreement has not yet been approved by the Department of the Army. The Air Force is still working to resolve comments and concerns at lower organizational levels.
In the interim, the Air Force has greatly increased the number of Predator aircraft it plans to procure annually to meet the warfighter's high demand for this capability, driven in part by the war on terror. Instead of buying 7 Predator aircraft per year, as the Air Force had initially planned, it now plans to buy 24 aircraft in both 2007 and 2008, as well as another 22 aircraft as stated in the fiscal year 2007 supplemental request. In total, the Air Force plans to buy 160 Predators between fiscal years 2008 and 2013. The Air Force is currently seeking authority to become the executive agent for medium- and high-altitude UAS operating above 3,500 feet. As a part of these efforts, in March 2007 the Air Force began a comprehensive study of all existing and planned (airborne and space-based) ISR systems. As executive agent, the Air Force believes it could improve the allocation of UAS, avoid duplication of separate service acquisition efforts by centralizing procurement, standardize downlinks, and control burgeoning bandwidth requirements. The Air Force still intends to procure two Warriors for testing, but details of a potential collaboration with the Army remain uncertain. The timing of the Army and Air Force's collaboration is critical: the longer the services wait to collaborate, the lower the return. The opportunity to achieve synergies in design, manufacturing, and support will greatly diminish as the Warrior matures and more and more Predators are added to the inventory. The environment in which DOD operates has changed significantly since 2001. In recognition of this, the department's 2006 Quadrennial Defense Review described a vision that focuses on defining ISR needs based on the type of intelligence or sensor rather than on the platform that carries the sensor. Specifically, the department's vision for ISR is to establish persistent surveillance over the battlefield and define ISR sensor needs in terms of the type of intelligence needed rather than the air, surface, or space platform on which the sensors operate. Accordingly, the department initiated a number of studies aimed at reviewing ISR requirements and developing solutions either through new development programs or changes in current systems (see app. III for a brief description of these studies). While most of the studies have been completed, as of March 2007, DOD had released the results of only one—the Joint ISR study, which validated the requirement and confirmed the continued need for the Army's ACS program. The results of the other studies have not been released outside of DOD, but according to DOD officials, several were briefed to senior leadership within OSD and the Joint Staff. One study DOD is undertaking shows some promise for better managing the requirements for future ISR capabilities across DOD by applying a joint capability portfolio management concept to acquisition planning. This pilot program is a test case to enable DOD to develop and manage ISR capabilities across the entire department—rather than by military service or individual program—and, by doing so, to improve the interoperability of future capabilities, minimize capability redundancies and gaps, and maximize capability effectiveness. However, the portfolios are largely advisory and will, as a first step, provide input to decisions made through the acquisition and budgeting process. At this point, the capability portfolio managers have not been given direct authority to manage fiscal resources and make investment decisions.
Without portfolios in which managers have authority and control over resources, DOD is at risk of continuing to develop and acquire systems in a stovepiped manner and of not knowing whether its systems are being developed within available resources. In addition to the various studies previously initiated, two more studies were commissioned in February and March of 2007. The Under Secretary of Defense for Acquisition, Technology, and Logistics requested that the Defense Science Board establish a task force to assess whether current and planned ISR systems provide sufficient support for U.S. military forces. The objectives of the study are to (1) determine what improvements are needed for ISR systems, (2) examine the balance and mix of sensors to identify gaps and redundancies, and (3) identify vulnerabilities, potential problems, and consistency with DOD's network-centric strategy. The Under Secretary also asked the task force to review the findings of previous studies as part of the assessment. In addition, the Chief of Staff of the Air Force recently began a comprehensive study of all existing and planned airborne and space-based ISR systems to determine their efficiencies and inefficiencies. The effort includes developing a plan to increase the interdependence of medium- and high-altitude UAS and establish the Air Force as the executive agent for all UAS in those regimes. A specific date for reporting the results of these two studies has not been established. Many ISR systems suffer from the same cost, schedule, and performance problems as other DOD acquisition programs because they fail to establish a good business case or to capture critical product knowledge at key decision points before moving forward in the acquisition process. In some cases, the outcomes have been costly, as legacy systems once planned for an earlier retirement must now stay in the inventory, requiring additional unplanned investments to keep them relevant and operationally ready until a new capability can be fielded. The funds spent to keep these systems viable represent opportunity costs that could have been used for other DOD priorities. We have made numerous recommendations in recent years to improve the acquisition process and get more predictable outcomes in major acquisition programs, and these would apply to the development of ISR systems. Ideally, because of the warfighter's universal needs for ISR information, determining requirements and planning for ISR acquisition programs should be based on a joint process that occurs at the enterprise level in DOD to ensure economies and efficiencies based on effective joint solutions to the maximum extent possible. DOD has various studies in progress that appear to have this as a goal for ISR, but for now it is not routinely happening. The portfolio management pilot program could potentially improve how DOD determines requirements and sets up new acquisition programs for ISR capabilities. However, the portfolios are largely advisory, and the managers have no direct authority to make investment decisions. Without authority and control over investments, there is the risk that nothing will change. At best for now, some acquisition programs have, through their own initiative, garnered benefits from collaborative efforts. Others still choose a stovepiped approach to provide a unique system for a specific military service's needs.
While DOD has numerous ISR studies either recently completed or ongoing, it has not recently implemented substantive actions to gain greater jointness in ISR acquisition programs. Therefore, we recommend that DOD take the following actions:
1. Develop and implement an integrated enterprise-level investment strategy approach that is based on a joint assessment of warfighting needs and a full set of potential and viable alternative solutions, considering cross-service solutions including new acquisitions and modifications to legacy systems, within realistic and affordable budget projections for DOD. This strategy should draw on the results of ongoing studies, like the portfolio management pilot program, but should include the authority and controls needed to ensure a single point of accountability for resource decisions.
2. Report to the defense committees by August 1, 2007, the results of the ISR studies and identify the specific plans and actions needed to make joint acquisition decisions in ISR programs and to improve the way it plans, buys, organizes, manages, and executes its ISR acquisition programs and operations.
DOD provided us with written comments on a draft of this report. The comments appear in appendix IV. DOD agreed that it can report the interim status of ongoing ISR studies to the committees by August 1, 2007, but suggested that delaying this reporting until December 31, 2007, would allow the department to include the results of two pertinent studies now ongoing. We believe a full reporting in December 2007 would be useful if it includes DOD's detailed plans on how it will achieve an integrated enterprise-level investment strategy for ISR, including planned changes to policy and guidance, organization, and points of authority and responsibility. However, we believe an interim report on the results and planned outcomes of completed studies should be provided to the committees by August 2007. DOD agreed with our recommendation to develop and implement an integrated enterprise-level investment strategy for ISR and stated that it thought this process was well under way in existing department processes. However, it non-concurred with having a single point of authority and control for ISR resource decisions and provided a number of arguments as to why sufficient information was not included in the report to support this specific part of the recommendation. We continue to believe that our recommendation for an enterprise-level investment strategy with a single point of accountability for resource decisions is necessary to maximize efficiency and effectiveness in acquiring major systems. The Defense Science Board Summer Study on Transformation, reported in February 2006, came to similar conclusions: the Secretary of Defense should assemble a small direct-reporting cell to create and maintain a metric-based, multiyear plan that specifies what is to be done, when, with what resources, and with what capability output. It concluded that the Under Secretary of Defense for Acquisition, Technology, and Logistics needs authority over architectures, resources, and personnel. Our other reviews of the acquisition and requirements processes continue to show that DOD has not sufficiently improved these processes to ensure that cross-service redundancies are reduced or eliminated where possible. Therefore, without this single point of authority, limited defense resources are still not optimally used to develop and produce weapon systems.
Our comments below address the specific arguments presented in DOD's response to this report. We believe that many of the ongoing initiatives to achieve a more integrated investment strategy approach for ISR are steps in the right direction, but we are concerned that they will not go far enough to address the problems that have affected DOD acquisitions for some time now. DOD suggests that the Joint Capabilities Integration and Development System (JCIDS) has been implemented to identify joint warfighting capabilities. We agree that JCIDS emphasizes a more joint approach to identifying and prioritizing warfighting needs. However, as reported in our March 30, 2007, report, Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes, this system is still not working as planned. Despite the provisions of JCIDS, investment decisions continue to be made through processes that do not function together to ensure that DOD pursues needs that are not redundant. The Warrior decision is an example in which the service chose to ignore the recommendations of the Joint Requirements Oversight Council and proceeded with a unique program. DOD stated that its Portfolio Management Experiment supports this enterprise-level strategy, but it is still a pilot program, and actual changes to the processes have not been identified to show how it will ensure more responsible and joint decision making for major acquisition programs. As pointed out in the report, while this seems like a good first step, the portfolios are largely advisory, and managers have not been given direct authority to manage fiscal resources and make investment decisions. Without this authority, DOD continues to risk stovepiped solutions that may overlap and may not be affordable within available resources. Furthermore, in recent years the real input from DOD leadership has come at the end of the year, just before the budget is due to go to Congress. Each December, the Office of the Secretary of Defense issues a program budget document that has included radical changes to major acquisition programs, but without transparency into the detailed analysis and integrated investment planning that should underlie such major investment decisions. In its response, DOD also states that a number of successes have occurred within the Unmanned Aerial Systems portfolio managed by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. While there may be some successful UAS programs, there are also examples of large, important programs that have significantly exceeded cost estimates and delivery dates. We believe that having a UAS portfolio is contrary to the direction of the Quadrennial Defense Review to get away from “platform”-based decisions and move toward “sensor”-based decisions. The Battlespace Awareness Functional Capabilities Board, as part of the JCIDS process, seems to be a more representative grouping of ISR programs than the UAS portfolio. We believe that if ISR programs were organized based more on “sensor” requirements, it would not be necessary to have both bodies for ISR investment decision making. DOD states that we did not consider the department's ongoing efforts to develop UAS and ISR Roadmaps, which represent, according to the department, enterprise-level strategies. While we did not review these roadmaps as part of this engagement, GAO has ongoing work under a separate engagement that is examining the ISR Roadmap.
The initial conclusions from that review were presented to the House Armed Services Subcommittee on Air and Land Forces in testimony on April 19, 2007. GAO testified that the ISR Roadmap was a noteworthy step in examining ISR capabilities but that it does not represent a comprehensive vision for the ISR enterprise or define a strategy to guide future investments. Furthermore, the ISR Roadmap is managed by the Office of the Under Secretary of Defense for Intelligence, while the UAS Roadmap is managed by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. This division emphasizes the need for a single point of authority for ISR investment decisions within OSD. Finally, DOD states that the report does not recognize ground component requirements and operating concepts for multiple joint missions and that it did not recognize the benefits of acquisition programs with unique requirements or the benefits of competition. We believe the report, as it relates to the decision to buy a unique platform for the Warrior, did recognize the difference in how the two services planned to operate the platforms. However, we do not believe that this difference necessarily justifies DOD's buying two different platforms to satisfy the warfighter's expressed ISR requirement. Furthermore, we believe it has been the unique, stovepiped solutions of the military services that have over time created unnecessary duplication and inefficient use of limited defense funding. As to competition, GAO has consistently expressed its belief that, with proper controls and oversight, competition is beneficial to price, reliability, performance, and contractor responsiveness in buying major weapon systems. We are sending copies of this report to the Secretary of Defense and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Michael Hazard, Assistant Director; Dayna Foster; Rae Ann Sapp; Michael Aiken; and Karen Sloan. This report examines the Department of Defense's (DOD) development and acquisition of airborne intelligence, surveillance, and reconnaissance (ISR) systems. The primary focus of this work is to identify practices and policies that lead to successful fielding of weapon systems to the warfighter at the right time and for the right price. Specifically, our objectives were to (1) evaluate various ISR platforms for potential synergies and assess their cost and schedule status and the impact of any increases or delays on legacy systems and (2) evaluate the effectiveness of ISR investment decisions. Our work was conducted between June 2006 and April 2007 in accordance with generally accepted government auditing standards. We selected 20 major airborne ISR programs that were in technology or systems development, already fielded but undergoing significant upgrade, or operating in the field but due to be replaced by a system in development, as well as one space-based program in technology development. We considered a program in development to be major if DOD designated it as a major defense acquisition program or would likely do so at Milestone B. We considered systems already operating in the field as major if they played a role in current operations.
For the systems we selected, we obtained information on current or projected operational capabilities, acquisition plans, cost estimates, schedules, and estimated budgets. We analyzed the data to determine whether pairs of similar systems shared common operating concepts, capabilities, physical configurations, or primary contractors. We reviewed acquisition plans for programs in development to determine whether they had established sound business cases or, if not, where the business case was weak. We reviewed cost and schedule estimates to determine whether they had increased and, where possible, identified reasons for the increases. For systems in development that experienced a schedule delay, we determined whether the delay had an impact on the legacy system to be replaced and, where possible, determined the cost or capability impact of the delay. We assessed the reliability and validity of data provided by agency officials and third parties by discussing the data with officials from multiple agencies at varying levels of responsibility. We also discussed the results of our reviews and analyses with program office officials; Army, Navy, and Air Force acquisition and requirements officials; the Office of the Under Secretary of Defense for Intelligence; and the Office of the Joint Chiefs of Staff for Intelligence. The Army is planning to develop the Aerial Common Sensor (ACS) as an airborne ISR and target acquisition system and is designing it to provide timely intelligence data on threat forces to the land component commander. The platform will be a piloted business jet and will carry a suite of sensors to collect information on electronics and communications signals, optical and infrared images, measures and signatures, and synthetic aperture radar (SAR) images. Four onboard intelligence analysts will operate the mission equipment, but with the appropriate connectivity, the system can perform its mission with just the flight crew. The ACS will replace the Guardrail Common Sensor and the Airborne Reconnaissance Low airborne systems and will coexist with those systems until it is phased in and they are retired. The Army has not established a date for initial operating capability. ACS was to have replaced the Navy EP-3 as well. However, the Navy recently decided to pursue its own development program and expects to enter system development in 2010. Airborne Reconnaissance Low (ARL) is composed of communications intelligence and imagery intelligence sensors and onboard operators in a piloted aircraft. The current inventory includes two configurations: one with a complete communications sensor package capable of intercepting and locating radio emissions and providing reports to appropriate commanders and intelligence-processing centers on the ground, and a more capable version that combines communications and electro-optical (EO) sensors and SAR with moving target indicator on one aircraft. The ARL will eventually be replaced by ACS. The Airborne Signals Intelligence Payload (ASIP) is a signals intelligence (SIGINT) sensor being developed for use on multiple Air Force platforms. It is part of the Air Force's efforts to modernize its SIGINT processes by developing an Air Force-wide capability for performing SIGINT. ASIP sensors will be developed for use on the legacy U-2 and Rivet Joint manned aircraft. The payload will also be used on legacy and developmental unmanned aerial systems (UAS), including the MQ-1 Predator and RQ-4B Global Hawk. Details about its capabilities are classified.
The Broad Area Maritime Surveillance (BAMS) UAS is scheduled to begin systems development in October 2007. The BAMS system will be land-based and provide a high-altitude, persistent ISR capability to the fleet and joint forces commander. BAMS will conduct continuous maritime and littoral surveillance of targets. As part of the Navy's maritime patrol and reconnaissance force, it will operate independently or in conjunction with the Multi-mission Maritime Aircraft (MMA) and the EP-3/EPX signals intelligence platform. Because the BAMS has not yet begun system development, vehicle design and sensor payload decisions have not been finalized; however, the payload will include active imaging radar, passive optical imaging, and limited signals collection capability. Its projected initial operational capability is 2013. The E-10A program originally consisted of three primary elements: the aircraft, the radar, and the battle management command and control system. The aircraft proposed for the E-10A was the Boeing 767 jet aircraft. The radar was to be the Multi-Platform Radar Technology Insertion Program, an advanced radar that provides capability for cruise missile defense through air moving target indicator as well as enhanced ground moving target indicator. The program was reduced from a technology development program to a demonstration effort. The demonstration effort was focused on assessing the newer radar, which will also be used on the Global Hawk UAS. However, the Air Force recently canceled the demonstration effort. The EP-3E Airborne Reconnaissance Integrated Electronics System (ARIES) II is the Navy's only land-based SIGINT reconnaissance aircraft. It is a legacy aircraft based on the Navy's P-3 Orion airframe and provides fleet and theater commanders worldwide with near-real-time tactical SIGINT. It uses sensitive receivers and high-gain dish antennas to perform its mission. The Navy had planned to replace this aircraft with the Army ACS because the EP-3 airframe is aging and has a limited life span. Drawdown of the EP-3E aircraft was scheduled to begin in the 2012 time frame but may be extended. Delays in ACS development contributed to the Navy's recent decision to pursue its own replacement for the EP-3. The EPX is the Navy's replacement for its aging EP-3. In late summer 2006, after a study on joint ISR requirements had been completed, the Navy and Army concluded that there were significant requirements differences between the two services. As a result, the Chief of Naval Operations directed the Navy to recapitalize the EP-3 to provide multi-intelligence capability. While requirements for the EPX have not been fully established, it will be a multi-intelligence platform and will include communications and electronics intelligence capability, optics, and radar. The EPX is part of the Navy's maritime patrol and reconnaissance force, along with the MMA and BAMS. The Army Fire Scout is being developed as one of the UAS within the Future Combat Systems. As part of this system of systems, the Fire Scout is designed to support air-ground operations and reconnaissance, surveillance, and target acquisition missions. It will employ SAR with moving target indicator, EO sensors and a laser rangefinder/designator, a tactical signals intelligence package, and the joint tactical radio system communications suite. The Fire Scout is designed to take off and land in unimproved areas to directly support brigade combat team operations. Its initial operating capability is tied to the Future Combat Systems, which is planned for December 2014.
The Navy Fire Scout, or the vertical takeoff and landing unmanned aerial vehicle (VTUAV) system, entered systems development in February 2000. The Fire Scout is designed to provide ISR as well as targeting data and damage assessments to tactical users. It is capable of autonomous vertical takeoff and landing on aircraft carriers as well as in unprepared landing zones. The Fire Scout includes EO/IR sensors, a laser designator system, and a common automatic recovery system. The modular payload approach also includes the tactical control system, tactical common data link, and a mine detection system. Its initial operating capability is planned for October 2008. The Joint Surveillance Target Attack Radar System (Joint STARS) is a joint Air Force and Army wide-area surveillance and attack radar system designed to detect, track, and classify moving and stationary targets and support attacks against them. Joint STARS is a legacy platform first used in the 1991 Gulf War. It has been used extensively in support of Operations Enduring Freedom and Iraqi Freedom. The Joint STARS fleet of aircraft is currently being modified with new communication and navigation equipment, and the Air Force is developing advanced mission capabilities and identifying low-cost emerging technologies for future use. In addition, the Air Force intends to replace Joint STARS engines to make the platform more reliable and reduce operating and support costs. Finally, the Air Force had originally intended to place the Multi-Platform Radar Technology Insertion Program (MP-RTIP) radar on Joint STARS but decided not to when it chose to go forward with the E-10A, which was subsequently canceled. The Global Hawk is a high-altitude, long-endurance UAS designed to provide near-real-time, high-resolution ISR imagery. It employs a SAR, ground moving target indicator, and EO/IR sensors. After a successful technology demonstration, the Global Hawk entered development and limited production in March 2001. Production of the initial seven (RQ-4A) aircraft is complete. The larger, more capable version (RQ-4B) includes an advanced signals intelligence payload and improved radar technologies. Initial operational capability is planned for September 2007. Guardrail Common Sensor (GRCS) is an airborne signals intelligence collection, location, and exploitation system in the current inventory that provides near-real-time signals intelligence and targeting information to tactical commanders. The system integrates a communications intelligence sensor and precision geolocation of signals. The platform is a small, piloted aircraft with no onboard analysts. The Army plans to eventually replace GRCS with the ACS. The Navy's MMA is part of the broad area maritime family of systems. The MMA was initially planned to interoperate with the BAMS UAS and the ACS. The MMA is intended to replace the Navy's P-3C Orion system. Its primary role will be anti-submarine and anti-surface warfare, and it will have some ISR capability. The Navy plans for the aircraft to achieve initial operational capability in 2013. The MP-RTIP is a family of scalable, advanced radars being developed for the RQ-4B Global Hawk and the E-10A. The Air Force funded the sensor development under the E-10A budget line as a separate item. The radar is currently in system development and demonstration. However, in February 2007, the Air Force removed funding for the E-10A radar development program starting in fiscal year 2008.
The Air Force still intends to develop the radar for the Global Hawk and begin fielding the sensor by 2011. The Predator is a medium-altitude, long-endurance UAS. The Predator began as an advanced concept technology demonstration program and has been operational since 1995. Originally designed as a persistent ISR platform, it was modified in 2001 to carry two Hellfire missiles. The Predator employs EO/IR sensors, a laser designator, and day/night cameras that produce full motion video of the battlefield, and it can be configured to carry SAR. Used as an armed reconnaissance system, the Predator also has a multi-spectral targeting system with Hellfire missile targeting capability. The Air Force has begun an effort to develop and integrate signals intelligence capability on the Predator. To accelerate this effort, the Air Force increased this budget almost sixfold in fiscal year 2008. The Reaper (formerly Predator B) is a multirole, medium- to high-altitude endurance UAS. Its primary mission is persistent hunter-killer operations against small mobile or fixed ground targets. Its secondary mission is to gather ISR data. It will use EO/IR sensors, a laser rangefinder/designator, and SAR and will carry ordnance such as the Joint Direct Attack Munition and Hellfire missiles. The Reaper entered systems development in February 2004. Its initial operating capability is planned for 2009. The Air Force has begun to examine the feasibility of incorporating signals intelligence capability on the Reaper. Rivet Joint (RJ) is a reconnaissance aircraft in the current inventory that supports theater- and national-level consumers with near-real-time, on-scene intelligence collection, analysis, and dissemination capabilities. The aircraft is an extensively modified C-135 with a suite of onboard sensors that allows the mission crew to detect, identify, and geolocate signals throughout the electromagnetic spectrum. The mission crew can then forward gathered information in a variety of formats to a wide range of consumers via the system's extensive communications suite. The interior seats 34 people, including the cockpit crew, electronic warfare officers, intelligence operators, and in-flight maintenance technicians. The first versions of the system were deployed in 1964, and the system has since undergone extensive upgrades to both the platform and mission equipment. The Air Force does not have any plans to replace the system. Space Radar (SR) is an Air Force-led joint DOD and intelligence community program to develop a satellite to find, identify, and monitor moving or stationary targets under all weather conditions on a nearly continuous basis across large swaths of the earth's surface. As envisioned, SR would generate volumes of radar imagery for transmission to ground-, ship-, air-, and space-based systems. Initial capability is planned for 2017. The U-2 provides continuous day-and-night, high-altitude, all-weather surveillance and reconnaissance in direct support of U.S. and allied ground forces. It is a single-engine, single-seat ISR aircraft. The U-2 is capable of collecting multisensor photo, EO/IR, and radar imagery as well as collecting SIGINT data. It can downlink all data except wet film. The Air Force proposed to begin retiring the U-2 in 2007. However, Congress disagreed with the decision and prevented retirement in 2007. Congress also directed the Air Force to first certify that the capability was no longer required.
In March 2007, the Air Force revised the schedule for removing the U-2 from the inventory and now proposes doing so at a slower rate than before, beginning in fiscal year 2008. The Air Force is not requesting funding for the U-2 past 2007, but it is not clear whether the Air Force has provided the certification that Congress requested. The extended-range, multipurpose Warrior UAS began systems development in April 2005. It will operate with manned aviation assets such as the Apache helicopter and perform missions including reconnaissance, surveillance, and target acquisition/attack. It is being developed to satisfy the Army's requirement for a UAS that is dedicated to the direct operational control of its field commanders. The Warrior is designed with an automatic takeoff and landing system, full motion video capability, a tactical signals intelligence payload, a multirole tactical common data link, EO sensors, SAR/moving target indicator, Ethernet communications capability, and redundant avionics. Its initial operational capability is planned for 2010. Program Decision Memorandum III, dated December 2005, directed that several studies be undertaken, including the following. The Army and Navy, in coordination with the Air Force, Joint Staff, Under Secretary of Defense for Policy, Under Secretary of Defense for Intelligence (USD(I)), and Program Analysis and Evaluation (PA&E), were directed to conduct a study of joint multi-intelligence airborne ISR needs, focusing on trade-offs among manned and unmanned airborne platforms and how those trade-offs translate into requirements for recapitalizing the Army, Navy, and Air Force legacy systems. The participants were directed to identify any resources, in addition to the fiscal year 2006 President's budget program of record, needed to sustain the Army and Navy aircraft until they can be replaced. The study was completed in late summer of 2006 and concluded that the requirements for the ACS were still valid. The Strategic Command, in coordination with the Air Force; Navy; Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); USD(I); and PA&E, was directed to review the Air Force's Global Hawk acquisition and U-2 retirement plan and determine whether high-altitude, long-endurance, multi-intelligence ISR requirements will be satisfied during the transition. The findings were briefed within the Office of the Secretary of Defense (OSD) in early fall 2006. USD(I), in conjunction with the Joint Staff, the services, and PA&E, was directed to develop a methodology to migrate to a capability-centric focus, instead of a platform-centric focus, for determining combatant commander and joint task force airborne ISR requirements. Results were briefed within OSD in early fall 2006. On March 5, 2007, the Air Force Chief of Staff announced the start of a comprehensive study of all existing and planned ISR systems—both airborne and space-based—to consider the efficiencies and inefficiencies in the theater and global warfighting templates. As part of this broad effort, he advocated that the Air Force immediately become the executive agent for medium- and high-altitude UAS. The expected benefits from the study and the executive agent concept include improving the distribution of intelligence assets across all theaters and components, avoiding duplication of acquisition efforts, standardizing UAS operations and downlinks, and controlling ballooning bandwidth requirements.
The results of the study will include a comprehensive plan, due in late April 2007, to optimize ISR capabilities. In February 2007, the Under Secretary of Defense for Acquisition, Technology, and Logistics requested that the Defense Science Board establish a task force to assess whether current and planned ISR systems provide sufficient support for U.S. military forces. The primary objective is to determine what improvements are needed in carrying out the tasks associated with ISR systems. A second objective is to examine the mix and balance of ISR sensors to identify gaps and redundancies. The task force was also asked to examine current and planned systems for vulnerabilities, new opportunities and potential problems, and consistency with the department's strategy for networked operations. Finally, the memorandum also asked the task force to review the results of a number of studies, initiated by OSD and completed in the fall of 2006, following the completion of the 2006 Quadrennial Defense Review. Several of these studies are summarized in this appendix. The tasking memorandum did not include time frames for completing the study or reporting the results.
The Department of Defense (DOD) is experiencing a growing demand for intelligence, surveillance, and reconnaissance (ISR) assets to provide vital information in support of military operations. Over the next 7 years, DOD plans to invest over $28 billion in existing and new airborne ISR acquisition systems. This represents a marked increase over prior ISR investments. Given the significant investments, GAO was asked to (1) evaluate various ISR platforms for potential synergies and assess their cost and schedule status and the impact of any increases or delays on legacy systems and (2) assess the effectiveness of ISR investment decisions. To assess cost and schedule status, we reviewed programmatic and budget documentation. To evaluate investment decisions, we collected information on system capability, mission, and concept of operation and analyzed the data for similarities. DOD plans to invest over $28 billion over the next 7 years to develop, procure, and modernize 20 major airborne intelligence, surveillance, and reconnaissance systems. Nearly all of the systems in development have experienced cost growth or schedule delays. These problems have delayed the fielding of warfighting capability and have resulted in program restructuring, cancellation, or unplanned investments in older legacy ISR systems. For example, problems in developing the Aerial Common Sensor affected three legacy programs, increasing their collective budgets by 185 percent, or nearly $900 million. In many cases, GAO found that the newer ISR programs lacked a solid business case or a knowledge-based acquisition strategy before entering the development process. A good business case requires the manager to match the system requirements with mature technologies and a system design that can be built. This requires sufficient knowledge about the system, gained through basic systems engineering concepts and practices. Although it fights jointly, DOD does not always procure new systems jointly. Instead, each service typically develops and procures systems independently. Opportunities exist for different services to collaborate on the development of similar weapon systems as a means of creating a more efficient and affordable way of providing new capabilities to the warfighter. GAO identified development programs where program managers and services are working together to gain these efficiencies, as well as other programs with less collaborative efforts that could lead to more costly stovepiped solutions. For example, the Navy and Army have collaborated successfully on the Fire Scout; in contrast, the Air Force and Army have not been as collaborative on the Predator and Warrior systems, as each currently plans a unique solution to similar needs.
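The legacy-system figures in this summary imply the scale of the affected baseline budgets. The following back-of-the-envelope computation is illustrative only; it assumes nothing beyond the two reported numbers (a 185 percent increase amounting to nearly $900 million), and the derived baseline is approximate.

    # Back-of-the-envelope check of the legacy-program figures cited above
    # (dollars in millions). Only the two reported numbers are used; the
    # baseline is derived here purely for illustration.
    increase_fraction = 1.85     # collective budgets grew by 185 percent
    increase_dollars = 900.0     # reported as "nearly $900 million"

    baseline = increase_dollars / increase_fraction
    print(f"Implied collective baseline budget: about ${baseline:,.0f} million")             # ~$486 million
    print(f"Implied post-growth collective budget: about ${baseline + increase_dollars:,.0f} million")  # ~$1,386 million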
As discussed in our June 2010 report, both the Propane and Oilheat Acts specify three areas as mandatory functions and priorities for PERC's and NORA's programs and projects. The three mandatory areas are as follows:

Research and development: The Propane Act requires PERC to develop programs that provide for research and development of clean and efficient propane utilization equipment. The Oilheat Act directs similar oilheat-related research and development and directs NORA to fund projects in the demonstration stage of development. According to our analysis of PERC's audited financial statements, annual reports, and other financial information that they provided to us for our June 2010 report, of the about $350.6 million collected from assessments for 1998 to 2008, PERC and its affiliated state propane associations spent $28.1 million for research and development (8 percent). According to our analysis of NORA's audited financial statements, annual reports, and other financial information, of the more than $107.4 million NORA collected from assessments for 2001 to 2008, NORA and the affiliated state associations spent $6.2 million (5.8 percent) on research and development.

Safety and training/education and training: Both the Propane Act and the Oilheat Act require development of programs to enhance consumer and employee safety and training. PERC refers to this program area as “safety and training,” and NORA refers to it as “education and training.” Projects that fall into this spending category include developing employee training materials and conducting training courses for industry personnel. According to our analysis of PERC's and NORA's audited financial statements, annual reports, and other financial information that they provided to us for our June 2010 report, from 1998 to 2008, PERC and its affiliated state propane associations spent $50.7 million for safety and training (14.5 percent), and from 2001 to 2008, NORA and the affiliated state associations spent $17.8 million (16.5 percent) on education and training.

Public/consumer education: The Propane Act directs PERC to develop projects to inform and educate the public about safety and other issues associated with the use of propane. Similarly, the Oilheat Act directs NORA to develop programs that provide information to assist consumers and other persons in making evaluations and decisions regarding oilheat. Such activities have included the development of radio, television, and print advertising directed at consumers and industry professionals. According to our analysis of PERC's and NORA's audited financial statements, annual reports, and other financial information that they provided to us for our June 2010 report, from 1998 through 2008, PERC spent about $178.6 million for consumer education (50.9 percent), and from 2001 through 2008, NORA spent $68.4 million for consumer education (63.7 percent).

To fund their operations, PERC and NORA assess each gallon of propane or heating oil, respectively. Specifically, to fund PERC operations, each gallon of odorized propane gas sold is assessed $0.004. To fund NORA operations, each gallon of heating oil sold is assessed $0.002. While the Propane and Oilheat Acts contain certain restrictions on the types of activities PERC and NORA can undertake, the acts generally do not prohibit PERC and NORA from conducting programs or projects beyond the mandatory areas identified above, and both organizations have carried out additional activities.
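The spending shares cited above follow directly from the reported dollar amounts. The short Python sketch below is purely illustrative: it uses only the figures as reported in the June 2010 work, and small differences from the report's percentages reflect rounding.

    # Recomputation of the spending shares reported above (dollars in millions).
    # All figures are those reported in GAO's June 2010 report; this sketch is
    # not part of that report.
    perc_total, nora_total = 350.6, 107.4   # assessments collected, 1998-2008 and 2001-2008

    perc_spending = {
        "research and development": 28.1,   # reported as 8 percent
        "safety and training": 50.7,        # reported as 14.5 percent
        "consumer education": 178.6,        # reported as 50.9 percent
    }
    nora_spending = {
        "research and development": 6.2,    # reported as 5.8 percent
        "education and training": 17.8,     # reported as 16.5 percent
        "consumer education": 68.4,         # reported as 63.7 percent
    }

    for area, amount in perc_spending.items():
        print(f"PERC, {area}: {amount / perc_total:.1%}")
    for area, amount in nora_spending.items():
        print(f"NORA, {area}: {amount / nora_total:.1%}")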
Beyond the mandatory areas, PERC, for example, has spent funds on agriculture and engine fuel programs. In addition, to coordinate its activities with other parties, as required by the Propane Act, PERC has established an industry programs area to provide support, data, and other services to the propane industry and maximize its impact. Likewise, in 2004 and 2005, NORA funded an oil tank training and education program for tank installers, inspectors, and insurers to address concerns about storage tanks, which NORA officials stated spanned all three mandatory areas in the statute. In our June 2010 report, we found that some PERC and NORA activities appeared to meet the requirements of the acts, but certain other activities, such as some activities involving Congress or politically affiliated entities, raised issues as to whether they were covered by the Propane and Oilheat Acts' specific lobbying restrictions. Even when we assumed that these activities were permitted, issues remained about whether Congress anticipated that assessment funds would be used for these types of activities. We also found issues concerning PERC's funding of consumer education activities after spending restrictions were triggered and NORA's monitoring of the expenditures of its funds by state associations. The Propane Act prohibits the use of PERC assessment funds for certain “lobbying” activities, specifically for “influencing legislation or elections,” except for recommending to the Secretary of Energy any changes in the act or other statutes that would further the act's purposes. At the time of our June 2010 review, the Oilheat Act contained similar provisions, some of which were amended in 2014. However, some of PERC's and NORA's activities––particularly communications and expenditures related to Congress or to politically affiliated entities––raised issues as to whether they were covered under the acts' lobbying restrictions. In our June 2010 report, we found, for example, that PERC paid for a grantee to attend activities associated with the Republican and Democratic national conventions, for a grantee to contribute thousands of dollars to several politically active organizations, and for a grantee to spend thousands of dollars to host Senate and House receptions. We also found that minutes of an August 2008 NORA executive committee meeting indicated that the NORA President said he was seeking state senators' support for NORA reauthorization and that a December 2008 NORA-qualified Massachusetts state association newsletter indicated that the NORA President traveled to Washington to urge both Massachusetts senators to support NORA reauthorization. We found that neither the Propane Act nor the Oilheat Act provided guidance on what constitutes “influencing legislation or elections”; there was little pertinent legislative history; no court had addressed what this language means as used in these statutes; and other federal laws containing similar language had been interpreted in different ways. As such, it was not clear whether or not the Propane Act's or the Oilheat Act's prohibitions covered those types of activities. Even when we assumed PERC's and NORA's activities were permitted, issues remained about whether Congress anticipated that the assessment funds would be used for these activities, particularly when PERC and NORA had classified this spending as “consumer education,” one of the functions that the acts require PERC and NORA to carry out.
Specifically, issues remained about whether the activities qualified as “consumer education” under the acts. Issues also remained about whether Congress anticipated that such a high proportion of the groups' funding would go to consumer education activities (i.e., more than half), in comparison to the relatively little support given to research and development (i.e., 8 percent for PERC and less than 6 percent for NORA), which had been a key area of congressional interest as the laws were debated prior to enactment. Because PERC's and NORA's authorizing statutes did not provide for a particular funding level for specific activities or indicate a ranking among the activities designated as priorities, they afforded PERC and NORA wide latitude in deciding how, and in what amounts, to spend the assessments collected. Since the legislative history of both statutes indicated that a need for research and development funding was a key factor driving the legislation, PERC's and NORA's decisions to spend over half of their funding on consumer education raised issues about whether these funds were being used as Congress anticipated. Furthermore, while some PERC and NORA activities appeared to meet statutory requirements, the lack of specificity in the language of the statutes raised issues about what activities were covered under certain provisions of the acts. In our June 2010 report, we suggested that, as Congress considers whether to reauthorize NORA or amend PERC's authorizing statute, it consider imposing greater specificity on the requirements it has established and establishing mechanisms to enhance compliance with those requirements. Specifically, we suggested that Congress consider specifying any prioritization of activities it wants undertaken; detailing more specifically which activities are prohibited (such as some of those involving lobbying); subjecting PERC's and NORA's activities to review, interpretation, and approval by an independent, designated entity; and specifying a federal oversight role by requiring DOE to monitor and oversee the expenditure of PERC and NORA funds. In our June 2010 report, we also found that PERC initially designated certain activities as “consumer education” but, when price-based restrictions on consumer education programs were triggered in 2009, redesignated and continued the activities as “residential and commercial” matters. The Propane Act specified that if the 5-year average rolling price index of consumer grade propane exceeds a particular price threshold, PERC's activities must be restricted to research and development, training, and safety. Commerce notified PERC in August 2009 that this price composite index threshold had been exceeded. We found that, after the August notification, PERC approved three grants, including a no-cost change order to a previously approved grant. These grants initially had been proposed and approved as consumer education grants, which would be prohibited under the restriction, and PERC amended their designation to a new program area called “residential and commercial” matters. The Propane Act did not specifically define the scope of activities permitted under the price restriction or the activities that must cease. We concluded that the resulting lack of a precise statutory line between permitted and prohibited activities created difficulty in assessing compliance with the restriction. In our June 2010 report, we also found issues concerning NORA's monitoring of state oilheat associations.
It was unclear during our review whether NORA's monitoring procedures were adequate to detect noncompliance among its state grantees if it occurred. The Oilheat Act required NORA to monitor the use of funds it provides to state associations and impose any terms and conditions it considers necessary to ensure compliance with the act. The Oilheat Act also required NORA to establish policies and procedures that conform to generally accepted accounting principles (GAAP) for auditing compliance with the act. According to NORA's President, monitoring of state associations included, among other things, policies and procedures to review state grants and disbursements and requirements in grant agreements with the state associations that specify the authorized and unauthorized uses of NORA assessment funds. However, based on our review of general ledger entries, financial statements, and certain other reports and information prepared by selected state associations for our June 2010 report, we were unable to determine whether state associations' spending of NORA funds met the requirements of the Oilheat Act. For example, based on our review of the general ledger expenditure entries for 2006 to 2008, we found that hundreds of entries indicated only that a purchase was made, with no details as to the type of or reason for the purchase. We suggested that Congress consider requiring that NORA funds granted to qualified state associations be segregated in separate accounts, apart from other funds collected and used by those associations. Consistent with this suggestion, in NORA's 2014 reauthorization, the Oilheat Act was amended so that, as a condition of receiving funds from NORA, each state association is to deposit the funds in an account that is separate from other funds of the qualified state association. In our June 2010 report, we found that federal oversight of PERC and NORA was limited, in contrast to the routine federal oversight that existed for agriculture check-off programs. DOE did not exercise its oversight authority for either PERC or NORA, and DOE officials told us that they believed that the department had no oversight role regarding either one. DOE, however, is empowered to review both organizations' annual budgets; to recommend activities and programs it deems appropriate; and, in PERC's case, to require submission of reports on compliance, violations, and complaints regarding implementation of the Propane Act. We found that although DOE is authorized to be reimbursed by PERC for the department's PERC-related oversight costs (up to the average salary of two DOE employees), DOE officials told us the department never requested reimbursement because it did not incur any oversight costs. This lack of oversight is part of a long-standing pattern. For example, in a June 2003 report, we found that DOE's oversight of PERC was lacking and recommended that the department take corrective action. In its response to our June 2003 report, DOE stated that Commerce rather than DOE had oversight responsibility and, therefore, DOE did not act on our recommendation. In June 2010, we found that DOE's position regarding PERC remained unchanged. Importantly, as neither the Propane nor the Oilheat Act contained a specific enforcement mechanism for any potential PERC or NORA violations, in June 2010 we concluded that any oversight program implemented by a federal agency would be hampered.
As a result, in our June 2010 report, we suggested that, as Congress considers whether to reauthorize NORA or amend PERC's authorizing statute, it may wish to impose greater specificity on the requirements it has established and to establish mechanisms to enhance compliance with those requirements. Specifically, we suggested that Congress consider establishing a specific enforcement mechanism and expressly authorizing DOE to refer any potential violations of law to appropriate enforcement authorities. Consistent with our suggestion, when Congress reauthorized NORA in 2014, it amended the Oilheat Act to, among other things, provide that, in the event of a violation of the act, the Secretary of Energy is to notify Congress of the noncompliance and provide notice of the noncompliance on NORA's website. With respect to PERC, although Congress made technical clarifications to the Propane Act in 2014 as noted above, it did not address this suggestion we made in 2010. As discussed earlier, Congress has authorized check-off programs for agriculture commodities, such as beef, blueberries, cotton, dairy products, eggs, peanuts, popcorn, pork, and potatoes, and USDA has mandated oversight activities over these programs. In reviewing budgets, advertising campaigns, and investment plans, USDA's Agricultural Marketing Service (AMS), among other things, ensures that the collection, accounting, auditing, and expenditure of promotion funds is consistent with the enabling legislation and department orders. Chairman Burgess, Ranking Member Schakowsky, and Members of the Committee, this completes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or by e-mail at ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Marya Link, Alison O'Neill, Kiki Theodoropoulos, Susan Sawtelle, and Barbara Timmerman made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congressionally authorized research and promotion programs, also known as check-off programs, may be established under federal law at the request of their industries. These programs are designed to increase the success of businesses that produce and sell certain commodities, such as milk and beef. To fund such programs, producers set aside a fraction of the wholesale cost of a product and deposit the monies into a common fund. In June 2010, GAO reported on two such programs, PERC and NORA, for the propane and oilheat industries (GAO-10-583). Legislation is currently being considered for a check-off program for the concrete and masonry industry. For this testimony, GAO focused on (1) mandatory functions and priorities for PERC and NORA programs and projects, (2) whether PERC and NORA activities were covered by certain legal requirements, and (3) federal oversight of PERC and NORA. For the 2010 report, GAO reviewed relevant laws, financial statements, annual reports, meeting minutes, and other reports. GAO also interviewed officials from the departments of Energy and Commerce and the private sector. GAO updated legislative information in July 2015. GAO's June 2010 report suggested that Congress consider clarifying certain requirements and specifying priority ranking, expenditures, and a DOE oversight role. When Congress reauthorized NORA in 2014, it amended the Oilheat Act by, among other things, taking actions on GAO's suggestions. As GAO reported in June 2010, the Propane Education and Research Act of 1996 (the Propane Act) and the National Oilheat Research Alliance Act of 2000 (the Oilheat Act), which authorized the establishment of the Propane Education and Research Council (PERC) and National Oilheat Research Alliance (NORA), specified the following areas as mandatory functions and priorities:

Research and development: The Propane Act requires PERC to develop programs for research and development of clean and efficient propane utilization equipment. The Oilheat Act directs similar oilheat-related research and development and directs NORA to fund demonstration projects.

Safety and training/education and training: Both acts require development of programs to enhance consumer and employee safety and training. PERC refers to this area as “safety and training,” and NORA refers to it as “education and training.”

Public/consumer education: The Propane Act directs PERC to develop projects to inform and educate the public about safety and other issues associated with the use of propane. Similarly, the Oilheat Act directs NORA to develop programs that provide information to assist consumers and other persons in making evaluations and decisions regarding oilheat. Such activities have included developing radio, television, and print advertising.

To fund their operations, the acts require PERC and NORA to assess each gallon of odorized propane gas or heating oil sold at $0.004 and $0.002, respectively. GAO found that some PERC and NORA activities appeared to meet the requirements of the acts, but certain other activities raised issues. For example, activities involving Congress or politically affiliated entities raised issues about whether they were covered by the acts' specific lobbying restrictions. Even if these activities were permitted, issues remained about whether Congress anticipated that assessment funds would be used to fund them, particularly when PERC and NORA classified this spending as “consumer education”—one of the functions required by the acts.
Other issues GAO identified related to whether Congress anticipated that PERC and NORA would allocate the majority of their funding to education activities over the past decade (more than half), while allocating relatively little financial support to research and development (8 percent for PERC and less than 6 percent for NORA). When the laws were debated and before they were enacted, research and development had been a key area of congressional interest and ultimately was reflected as both a mandatory “function” and a high-focus “priority” in the final version passed by Congress. GAO found limited federal oversight of PERC and NORA. As of June 2010, the Department of Energy had not used the oversight authority granted by the Propane and Oilheat Acts, such as by reviewing budgets or making recommendations to PERC and NORA, as authorized by law. This lack of oversight was long-standing. For example, in a 2003 report, GAO had found that DOE's oversight of PERC was lacking and recommended corrective action.
Within DOD’s Office of the Assistant Secretary of Defense for Health Affairs, the office of Force Health Protection & Readiness provides support for all medically related DOD policies, programs, and activities. This includes responsibility for the development of most deployment- related health policies, including those related to OEHS. The service component commands include U.S. Army Central (USARCENT), U.S. Air Force Central Command, and U.S. Naval Forces Central Command. that unit. As of February 1, 2015, there was one preventive medicine unit available to conduct OEHS activities in Iraq and one in Afghanistan, which reported to USARCENT and U.S. Forces-Afghanistan, respectively. Each of the military services has a public health surveillance center that provides support and technical guidance on reporting on potential environmental risks to combatant commands and their subordinate commands. These public health surveillance centers include the U.S. Army Public Health Command, the U.S. Air Force School of Aerospace Medicine, and the Navy & Marine Corps Public Health Center. In addition, these centers provide technical expertise and support for the subordinate commands’ preventive medicine units in theater. These surveillance centers have also developed and adapted military exposure guidelines for deployment using existing U.S. national standards for human health exposure limits and technical monitoring procedures (e.g., standards of the U.S. Environmental Protection Agency and the National Institute for Occupational Safety and Health). As part of DOD’s health surveillance framework, Force Health Protection & Readiness deployment health policies require the preventive medicine units to regularly collect and report a variety of data during deployments to identify and respond to health threats that servicemembers may have encountered. Health surveillance during deployments includes identifying the deployed population at risk, recognizing and assessing potentially hazardous health exposures and conditions, employing specific preventive countermeasures, daily monitoring of real-time health outcomes, and reporting of disease and injury data. DOD collects and stores three types of data during deployments: (1) daily individual servicemember location data—such as the duty station—which is stored by DOD’s Defense Manpower Data Center; (2) medical data— including data on health outcomes acquired from servicemember medical records—which is stored by DOD’s Armed Forces Health Surveillance Center; and (3) OEHS data, including ambient air, water, and soil samples. Once collected, DOD currently uses two separate IT systems for the storage of OEHS data—MESL and DOEHRS: MESL, originally implemented in 2003, contains both classified and unclassified documents that have been scanned and uploaded into the system. Officials can use MESL for submitting OEHS documents, conducting searches based on key words, and downloading OEHS documents. DOD subsequently began implementing DOEHRS in 2006. Unlike MESL, DOEHRS is a database that incorporates additional functionalities including OEHS data collection, management, and assessment—including the ability to query data—in a single system. It allows users in theatre to capture field data, such as air, water, and soil samples; compare sample results with exposure guidelines; report sampling data; and view laboratory data instantly once data are loaded into the system. Unlike MESL, DOEHRS only contains unclassified data. 
While there are departmental requirements for certain OEHS reports for deployment sites—such as an initial assessment—the total number of OEHS reports for each deployment site varies because these reports reflect the specific occupational and environmental circumstances unique to each location. VA comprises three administrations, two of which focus on benefits for veterans: VBA and VHA. VBA processes veterans' claims for monthly compensation for service-connected disabilities. VBA determines eligibility criteria for disability compensation, which can include identifying specific health conditions, like those associated with occupational and environmental hazards in Iraq and Afghanistan, and determining whether these conditions are service-connected. VHA operates the VA health care system, which provides health care to veterans through its various facilities, including outpatient clinics and medical centers. VHA also may provide health coverage to spouses, survivors, and children of veterans who are permanently and totally disabled from a service-connected disability, or who died in the line of duty or from a service-connected disability. VHA also conducts or sponsors research on veterans' illnesses related to military occupational and environmental exposures. Inconsistencies between DOD and military service-specific policies regarding OEHS data storage have led to fragmentation and duplication of OEHS data between the department's two systems—MESL and DOEHRS. Officials with Force Health Protection & Readiness told us that DOD is transitioning from the use of MESL to DOEHRS, which has greater functionality. However, the departmental policy developed by this office—updated in 2011—states that all classified and unclassified OEHS data should be stored in MESL, even though DOEHRS was implemented in 2006. Officials from the U.S. Army Public Health Command—the office that maintains MESL and has technical expertise in DOEHRS—confirmed the department's intent to transition from MESL to DOEHRS, and told us that this transition would eventually include the transfer of all unclassified documents currently in MESL to DOEHRS (once DOEHRS had been sufficiently upgraded), while classified data would remain in MESL. However, these same officials told us that resources have not been obtained to complete this task and that the upgrades are time- and resource-intensive. Force Health Protection & Readiness officials explained that DOD's current policy does not reflect the potential transition from MESL to DOEHRS because developing the functionality of DOEHRS for archiving data from deployments was still under way when the policy was last updated in 2011—about 5 years after DOEHRS was implemented. However, these officials told us that the policy is currently being revised to require the storage of unclassified OEHS data in DOEHRS, and they expect the updated policy to be released in 2016. Further, when we reviewed all of DOD's relevant OEHS policies as well as corresponding policies developed by each of the military services—all dated after 2006—we identified inconsistencies about which system should be used to store OEHS data. Only 3 of the 14 policies we reviewed instruct officials to store OEHS data in DOEHRS, while 4 policies instruct officials to store OEHS data in MESL. Six of the policies instruct the use of both systems as appropriate, depending on the type of document being submitted and the availability of DOEHRS during a deployment.
Further, one of the policies does not list either system, as it says that databases are necessary for OEHS data storage but does not specifically mention DOEHRS or MESL. See table 1 for the list of 14 policies related to OEHS data storage. Inconsistent policies are contrary to federal standards for internal control, which state that management should have policies in place that are both appropriate and clear. MESL and DOEHRS store similar types of unclassified OEHS data, such as environmental sample data (e.g., water and soil samples) and risk assessments for deployment sites. (See table 2.) Our prior work has found that inefficiencies can occur when there is fragmentation and duplication, such as when more than one system offers the same services. Without consistent policies on which system should be used to store unclassified OEHS data, officials’ efforts to store these data are inefficient and have resulted in both fragmentation and duplication. This has occurred because, in some cases, similar types of unclassified OEHS data have been submitted to both MESL and DOEHRS, and in other cases, identical OEHS data have been submitted to both systems. However, neither system serves as a central repository for these data. As a result, it is difficult to identify complete and comprehensive data sets for specific types of OEHS data, which may lead to problems when officials attempt to use these data in the future. A U.S. Army Public Health Command official who has technical expertise in both systems confirmed that there is duplication of OEHS data storage, but stated that there is no reasonable way to determine its extent because only DOEHRS has specific data-level querying capabilities. Therefore, the extent of duplication could only be determined by comparing individual documents, which is not feasible because there are thousands of records in each system. DOD’s department-wide policy that governs the collection and storage of OEHS data during deployments does not specifically address quality assurance for OEHS data. As a result, the military services lack specific and consistent guidance on performing quality assurance reviews, which may affect the reliability of these data. According to military service officials, quality assurance reviews of OEHS data collected at deployment sites are generally performed once the data have been submitted to DOEHRS. As a result, it is not always feasible to verify the accuracy of the samples, and the quality assurance reviews are focused more on the completeness and reasonableness of data entry. Officials may perform these quality assurance checks while at a deployment site; however, the services’ public health surveillance centers in the United States may also conduct these reviews. According to officials, quality assurance reviews are limited to OEHS data that have been submitted to DOEHRS because MESL only contains uploaded documents that cannot be edited or modified. Data in DOEHRS can be reviewed for quality assurance purposes after they have been submitted, and the system has the capability of documenting that a quality assurance process has occurred. A Force Health Protection & Readiness official told us that the department’s deployment health policy was focused on monitoring the implementation of the policy as a whole and did not specifically address quality assurance for OEHS data in DOEHRS. As a result, some of the military services developed their own guidance, which has resulted in inconsistent approaches and levels of effort.
The Army’s policy that discusses quality assurance for OEHS data states that data should be collected to ensure reliability and completeness, but does not discuss how these data should be reviewed. U.S. Army Public Health Command officials told us that they check OEHS data submissions in DOEHRS for completion in all relevant data fields and for general reasonableness of the data. The Air Force’s policy that addresses quality assurance for OEHS data states that procedures for quality assurance should be developed, but does not give any specifics as to how this should be done. Officials from the Air Force Medical Support Agency—an entity that provides guidance for U.S. Air Force Central Command on how to execute Air Force policy—told us that they did not think that quality assurance of OEHS data in DOEHRS collected by the Air Force was occurring to any large extent. A Navy & Marine Corps Public Health Center official told us that the Navy and Marine Corps do not have policies on quality assurance for OEHS data. Despite having no policy, however, Navy & Marine Corps Public Health Center officials told us that they check for completion and general reasonableness of the data. These inconsistent quality assurance processes reduce DOD’s ability to be confident about the reliability of OEHS data—especially because these reviews do not always occur—and potentially limit the usefulness of these data for monitoring the health of servicemembers and veterans. This runs counter to federal standards for internal control, which state that management should continually monitor information captured and maintained for several factors, including its reliability. The military services use site assessments to identify and address potential occupational and environmental health risks at deployment locations. In 2007, DOD established the Occupational and Environmental Health Site Assessment (OEHSA) as the standardized process for assessing and reporting occupational and environmental health site conditions at deployment sites. While the OEHSA is DOD’s primary and most comprehensive process for assessing risks, other methods may also be used, as needed. OEHSAs are completed by preventive medicine units and generally include the initial evaluation of the following conditions, with continued surveillance as deemed necessary: ambient air, soil, water, radiological surveys, noise, occupational health hazards, waste disposal, and sanitation. OEHSAs also may contain location-specific findings and can include recommended countermeasures for mitigating these potential occupational and environmental health threats. These recommendations may include a wide range of activities, such as specifying any necessary education for servicemembers on how to identify potential health threats, the use of personal protective equipment to minimize exposure, or the need for new or continued testing of ambient air or water. In our review of 50 OEHSAs for sites in Iraq or Afghanistan, we found that almost all (46 of 50 OEHSAs) contained recommendations for mitigating potential health risks for servicemembers in these locations. For example, a 2011 OEHSA for a site in Afghanistan found that particulate matter concentrations exceeded military guidelines, and consequently, air quality was at moderate risk for a significant health concern.
As a result, the OEHSA recommended, among other things, that personal protective equipment be worn during periods when conditions were very dusty and that methods to suppress dust be instituted, such as using large rocks and gravel in high-traffic areas within a deployment site. In another example, a 2011 OEHSA for a site in Iraq noted that burn-pit exposure posed a moderate risk. As a result, the OEHSA recommended additional burn-pit air monitoring during the next assessment and that servicemembers fueling the burn pit wear respiratory personal protective equipment. Officials from USARCENT and U.S. Forces-Afghanistan told us that their respective Force Health Protection Officers ensure that OEHSAs are being completed—as required by CENTCOM policy—and perform quality assurance reviews for completion and reasonableness. For example, USARCENT’s Force Health Protection Officer told us that they check OEHSAs for completeness and reasonableness by reviewing relevant historical data documented in DOEHRS and by using their subject matter expertise. U.S. Forces-Afghanistan officials told us that their Force Health Protection Officer uses similar methods to monitor the completion of OEHSAs. In addition, the CENTCOM official responsible for OEHS has recently begun holding and documenting monthly meetings with the service component commands to further monitor OEHS activities, including the completion of OEHSAs for deployment sites. Although OEHSAs identify recommended countermeasures to mitigate some potential occupational and environmental health risks, the extent to which these recommendations are being implemented is unclear due to a lack of documentation and monitoring. According to CENTCOM policy, base commanders are responsible for using completed OEHSAs to make decisions about local deployment health activities. This could include, for example, whether and how to implement recommendations to mitigate identified risks. However, this policy does not require that base commanders document their decisions in these instances, and as a result, USARCENT and U.S. Forces-Afghanistan officials told us that they could not readily identify whether documentation of these decisions exists. In contrast, DOD’s policy for its safety and occupational health program requires that the department’s components, including the combatant commands, establish procedures to ensure that decisions related to risk management are documented, archived, and reevaluated on a recurring basis. Furthermore, clear and appropriate documentation is a key component of internal controls, according to the federal standards for internal control. Without such documentation, it is difficult to determine whether and how recommendations identified in site assessments are being implemented. USARCENT and U.S. Forces-Afghanistan officials told us that they are not monitoring the implementation of these recommendations, and instead rely on preventive medicine units to elevate any concerns about implementing these recommendations through the chain of command, as necessary. Nonetheless, CENTCOM is ultimately responsible for overseeing the coordination of force health protection programs and compliance with all DOD requirements within its area of responsibility. CENTCOM officials also told us that they are not monitoring the implementation of recommendations contained in OEHSAs because they, too, rely on preventive medicine units to elevate concerns as needed.
However, CENTCOM’s policy on risk management requires the implementation of internal controls and, subsequently, the supervision and evaluation of the effectiveness of these controls. Similarly, federal standards for internal control note that once an agency has identified areas of risk, it should determine how to manage the risk and what actions should be taken. Without relevant documentation on whether OEHSAs’ recommendations are being implemented, neither the combatant command nor the subordinate commands can readily determine the extent to which potential health risks for deployed servicemembers are being mitigated in Iraq or Afghanistan. Despite the ongoing collection of OEHS data, both DOD and VA have used OEHS data to a limited extent to address post-deployment health conditions. Force Health Protection & Readiness officials told us that the primary limitation with OEHS data collected during deployments continues to be the inability to capture individual servicemembers’ exposures—an issue that we reported on in 2012. These officials told us that this limitation has impeded the department’s ability to evaluate or treat some servicemembers who have been exposed to occupational or environmental hazards. To address this issue, DOD officials have been conducting research on the use of dosimeters—sensor devices that individuals wear to monitor real-time exposure to hazardous materials—which would allow the collection of individual exposure data. However, the logistics of using dosimeters during deployments are complicated because current technology allows a dosimeter to be used for only a single known hazard, and in deployed settings, servicemembers may be exposed to multiple hazards, some of them unknown. U.S. Army Public Health Command officials also told us that the type of dosimeter used depends on the type of hazard to which the servicemember would be exposed and the method of exposure. During deployments, these factors are often unknown; therefore, it is difficult to determine the type of dosimeter that should be used. Additionally, it may not always be feasible to have servicemembers wear dosimeters while performing deployment duties, and the resources needed for monitoring may not always be available, according to these officials. DOD also creates Periodic Occupational and Environmental Monitoring Summaries (POEMS), which summarize historical environmental health surveillance monitoring efforts and identify possible short- and long-term health risks at deployed locations. While these summaries do not represent confirmed exposures or individual exposure levels, they are an indication of possible exposures, which can inform diagnosis, treatment, and the determination of disability benefits. According to U.S. Army Public Health Command officials, POEMS are made available to medical providers to help inform diagnosis and treatment of servicemembers. However, these officials said that providers’ use of POEMS may be limited because the summaries do not provide information on the types of illnesses that may result from an exposure. VBA officials told us that they have access to unclassified POEMS through a MESL website. A VBA official told us that disability officials have been made aware of this website through a newsletter distributed to benefit managers throughout the country. However, this official could not tell us the extent to which POEMS are used in making disability determinations.
In September 2014, DOD implemented a new review process for the completion of POEMS, which includes making these records publicly available via a MESL website. As of March 2015, 23 of the approximately 40 POEMS from Iraq and Afghanistan have been made available. DOD officials told us that they intend to promote the website once all completed POEMS have been posted, although officials could not provide a timeline for this effort. VA also uses OEHS data to supplement information used to educate veterans on specific deployment issues, although its use of these data is limited. Specifically, VA’s Office of Public Health developed a website to help veterans learn about potential occupational and environmental hazards during deployments and their potential health effects. In doing so, VA’s Office of Public Health officials told us they develop the content of the website through environmental hazard information obtained from several sources, including the U.S. Environmental Protection Agency. These officials told us they may also use OEHS data about a specific incident to supplement their information. For example, the website includes detailed information about Gulf War veterans’ unexplained illnesses, including available benefits and links to relevant research. VA supplements this information with OEHS data about potential types of occupational hazards that Gulf War veterans may have experienced, such as exposures to industrial solvents and radiation. DOD and VA have also used OEHS data in research on post-deployment health conditions. For example, as part of the Millennium Cohort Study—the largest prospective study designed to assess the long-term health effects of military service both during and after deployment—DOD and VA conducted a study that found, among other things, that being located within 3 miles of a documented burn pit was not significantly associated with increased risk for respiratory symptoms or conditions compared with being located more than 3 miles from a burn pit. (See Smith et al., “The Effects of Exposure to Documented Open-Air Burn Pits on Respiratory Health Among Deployers of the Millennium Cohort,” Journal of Occupational and Environmental Medicine, vol. 54, no. 6 (2012): 708-716.) A related study examined the relationship between deployment to Iraq and Kuwait and the development of respiratory conditions post-deployment. However, neither study was able to identify a causal link between exposures to burn pits and respiratory conditions. Additionally, in its 2011 report, the Institute of Medicine was unable to determine whether certain long-term health effects are likely to result from exposure to burn pits during deployments because the studies it reviewed lacked the support to conclude the absence or existence of an association. As a result, it recommended that additional studies of health effects in veterans deployed to Iraq or Afghanistan were needed. Both VA and DOD have begun additional research as a result of this recommendation. Since 2003, DOD and VA have collaborated through the Deployment Health Working Group (DHWG) on deployment health issues related to occupational and environmental hazards. For example, the DHWG sponsored an initiative related to DOD’s work on the individual longitudinal exposure record (ILER), the intention of which is to provide linkages between different types of data—environmental monitoring, biomarkers of exposure, and troop locations and activity—and an individual’s medical records. As a result, the ILER would create a complete record of servicemembers’ exposures over the course of their careers, which could support disability benefits claims, as well as retrospective studies.
In April and May 2014, DOD outlined the users of the data, the sources of the data, guidance on developing a system to meet the needs of those who would use an ILER, and its intended functions. DOD officials told us that the project is currently in the second phase of its pilot program, and they are 6 to 8 years away from completing its development. In 2013, the DHWG helped develop a data transfer agreement that was intended to facilitate the exchange of data identified to assist both departments in several ways, including making clinical decisions for care. One aspect of the agreement specifies that DOD is to transfer to VA lists of servicemembers or veterans determined by DOD to have had an actual or potential environmental, occupational, or military-related exposure that has the potential for adverse long-term health effects. According to VA officials, DOD has not been proactively providing these lists. However, a VA official told us that the department considers all deployed servicemembers to have had potential exposures, and as a result, VA has been requesting lists of all servicemembers deployed to an area with a burn pit. Further, VA officials said that when they learn of an exposure incident and request data, DOD has generally fulfilled that request. In addition, VA officials told us that DOD sends data on servicemembers’ and veterans’ deployment status as requested by VA to verify eligibility of those who have signed up for its Burn Pit Registry. In August 2014, DOD officials established the Exposure Related Data Transfer Agreement Subgroup to improve the department’s efforts in proactively meeting the data transfer requirements. Specifically, the subgroup met in January 2015 to finalize its objectives, which are to determine the appropriateness and timeliness of sharing specific exposure-related data and information with VA and to develop recommendations for any medically relevant follow-up actions for servicemembers or veterans exposed to occupational and environmental health hazards. DOD officials did not have a time frame for when this work would be completed. In the wake of veterans’ unexplained illnesses, such as those that followed the Persian Gulf War, the collection of OEHS data has become increasingly important because of its critical role in helping to identify exposures and determine the causes of post-deployment health issues. Servicemembers deployed to Iraq and Afghanistan have continued to express concerns about health conditions they attribute to exposures during their deployment, and their eligibility to obtain certain benefits is based on whether these conditions can be connected to their military service. However, problems with the storage of and quality assurance for OEHS data compromise the departments’ ability to use these data in determining service connections for specific health conditions or in conducting other important research. Specifically, fragmentation and duplication in the storage of unclassified OEHS data—due to inconsistent policies across DOD and the military services—hinder the use of these data, as there is no single repository from which to retrieve them. Moreover, the reliability of OEHS data is also potentially problematic, as the military services have developed varying approaches for quality assurance reviews in the absence of departmental guidance.
Of additional concern, DOD does not know the extent to which recommended countermeasures for mitigating exposures to occupational and environmental health hazards are being implemented at deployment sites because CENTCOM does not require the documentation of these decisions. Consequently, CENTCOM and its subordinate commands—USARCENT and U.S. Forces-Afghanistan—do not proactively monitor the extent to which base commanders are mitigating potential health risks in Iraq or Afghanistan. The approach instead is to wait for concerns to be elevated—potentially putting servicemembers at risk for harmful exposures. While the health and well-being of our servicemembers and veterans are paramount, DOD’s and VA’s ability to prevent, diagnose, and study post-deployment health conditions is compromised without accessible and reliable information about risk mitigation activities and OEHS data. To eliminate the fragmentation and duplication in the storage of unclassified OEHS data, we recommend that the Secretary of Defense determine which IT system—DOEHRS or MESL—should be used to store specific types of unclassified OEHS data, clarify the department’s policy accordingly, and require all other departmental and military-service-specific policies to be likewise amended and implemented to ensure consistency. To ensure the reliability of OEHS data, we recommend that the Secretary of Defense establish clear policies and procedures for performing quality assurance reviews of the OEHS data collected during deployment, to include verifying the completeness and the reasonableness of these data, and require that all other related military-service-specific policies be amended and implemented to ensure consistency. To ensure that potential occupational and environmental health risks are mitigated for servicemembers deployed to Iraq and Afghanistan, we recommend that the Secretary of Defense require CENTCOM to revise its policy to ensure that base commanders’ decisions on whether to implement risk mitigation recommendations identified in OEHSAs are adequately documented and consistently monitored by the appropriate command. We requested comments on a draft of this report from DOD and VA. Both departments provided written comments that are reprinted in appendixes I and II. DOD also provided technical comments that we incorporated as appropriate. In commenting on this draft, DOD concurred with all of our recommendations, and VA generally agreed with our conclusions. DOD also provided the following responses to each of our recommendations: In responding to the first recommendation to clarify the department’s policy on which IT system (DOEHRS or MESL) should be used to store specific types of unclassified OEHS data, DOD noted that it will clarify DOD Instruction 6490.03 in a subsequent revision and will issue appropriate guidance on the use of DOEHRS and MESL. DOD also noted that we identified fragmentation and duplication, but that our only evidence was the use of two systems to store data. However, our evidence of fragmentation and duplication was based on the storage of OEHS data, which results from the inconsistent use of two systems. Specifically, as stated in our report, in some cases, similar types of unclassified OEHS data have been submitted to both MESL and DOEHRS, which would result in fragmentation of data storage. In other cases, identical OEHS data have been submitted to both systems, resulting in duplicate data storage.
Additionally, in response to our recommendation that DOD require all other departmental and military-service-specific policies to be likewise amended, DOD stated that once a new policy is published, the entire department, including the military services, revises related policies accordingly. As stated in our report, the other departmental and military service policies that are linked to the main DOD Instruction are inconsistent in identifying which system to use for storing OEHS data. Therefore, we believe that it is important for the department to ensure that the appropriate revisions are made to all related policies. In responding to the second recommendation to establish clear policies and procedures for performing quality assurance reviews of OEHS data collected during deployment, DOD noted that the implementation of such a process would be complex, in that it would need to include definitions of completeness and reasonableness, among other items. DOD also noted that this would require additional resources. While we appreciate DOD’s comments, an appropriate level of quality assurance needs to be performed to ensure the reliability of the OEHS data being collected. Otherwise, these data will not be useful for their intended purposes. In responding to the third recommendation to require CENTCOM to revise its policy to ensure that base commanders’ decisions on whether to implement risk-mitigation recommendations are documented and consistently monitored, DOD noted that DOD Instruction 6055.01 already requires this. Specifically, DOD noted that the Instruction requires that the DOD components establish procedures to ensure these decisions are documented, archived, and reevaluated on a recurring basis. However, as stated in our report, current CENTCOM policy does not require documentation of risk-mitigation decisions, as is required by DOD Instruction 6055.01. Further, CENTCOM officials told us that they are not monitoring the implementation of these recommendations. Therefore, we continue to believe that CENTCOM should revise its policy to align with Instruction 6055.01. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretaries of Defense and Veterans Affairs. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. In addition to the contact named above, Bonnie Anderson, Assistant Director; Amy Andresen; Jennie Apter; LaKendra Beard; Danielle Bernstein; Muriel Brown; Jacquelyn Hamilton; and Jeffrey Mayhew made key contributions to this report.
OEHS is the regular collection and reporting of occupational and environmental health hazard data that can be used to help prevent, treat, or control disease or injury. In 2005, GAO reported that DOD needed to make improvements to OEHS during deployments to address immediate and long-term health issues. GAO was asked to assess DOD's current OEHS efforts. This report examines (1) the extent to which the military services centrally store OEHS data and verify their reliability; (2) how, if at all, DOD identifies potential occupational and environmental health risks for sites in Iraq and Afghanistan, and to what extent these risks are mitigated; and (3) the extent to which DOD and VA use OEHS data to address post-deployment health conditions. GAO reviewed and analyzed DOD and military service policies on OEHS data storage and quality assurance, as well as policies related to conducting and monitoring assessments for deployment sites. GAO also interviewed DOD, military service, and VA officials, as well as groups representing servicemembers and veterans. Inconsistencies between Department of Defense (DOD) and military service-specific policies regarding occupational and environmental health surveillance (OEHS) data storage have contributed to fragmentation and duplication of OEHS data between two information technology systems—the Military Exposure Surveillance Library (MESL) and the Defense Occupational and Environmental Health Readiness System (DOEHRS). Not having consistent policies for which system should be used to store OEHS data is contrary to federal standards for internal control. As a result, officials' efforts to store these data have resulted in both fragmentation and duplication, which GAO's prior work has shown may result in inefficiencies. Specifically, in some cases, similar types of unclassified OEHS data are being submitted to both MESL and DOEHRS, and in other cases, identical OEHS data are being submitted to both systems. However, neither system serves as a central repository for these data, and as a result, it is difficult to identify complete and comprehensive OEHS data sets, which may lead to problems when officials attempt to use these data in the future. Additionally, DOD's policy for OEHS data does not specifically address quality assurance. Consequently, some of the military services have developed their own guidance, resulting in inconsistent approaches and levels of effort, which has reduced DOD's ability to be confident that the data are sufficiently reliable. Federal standards for internal control state that management should continually monitor information captured and maintained for several factors, including reliability. The military services use site assessments to identify and address potential occupational and environmental health risks at a deployment site. These assessments may include recommended countermeasures, such as the use of personal protective equipment. However, the extent to which these recommendations are being implemented is unclear because U.S. Central Command (CENTCOM)—the combatant command responsible for military operations in the geographic area that includes Iraq and Afghanistan—does not require base commanders to document their decisions on implementing them. Officials also said they are not monitoring these recommendations, and instead rely on others to elevate concerns, as necessary.
In contrast, DOD's policy for its safety and occupational health program requires the department's components, including the combatant commands, to ensure that risk management decisions are documented and reevaluated. Federal standards for internal control also note that appropriate documentation is a key internal control activity and that agencies should monitor their activities for managing identified risks. Both DOD and the Department of Veterans Affairs (VA) have used OEHS data to a limited extent to address post-deployment health conditions. For example, DOD officials said that the primary limitation with OEHS data collected during deployments continues to be the inability to capture exposure data at the individual servicemember level, although a method to do so is currently being explored. Additionally, DOD and VA use OEHS data to conduct research that may help determine service connections for post-deployment health conditions, but it has been difficult for researchers to establish a causal link between exposures and specific health conditions. GAO recommends that DOD clarify its policies for the storage and quality assurance of OEHS data, and require other related policies to be amended accordingly. GAO also recommends that CENTCOM revise its policy to require adequate documentation and consistent monitoring of deployment risk mitigation activities. In commenting on the report, DOD concurred with GAO's recommendations, and VA generally agreed with the conclusions.
Under the Clean Water Act, EPA is responsible for publishing water quality criteria that establish thresholds at which contamination—including waterborne pathogens—may threaten human health. States are required to develop standards, or legal limits, for these pathogens by either adopting EPA’s recommended water quality criteria or other criteria that EPA determines are equally protective of human health. The states then use these pathogen standards to assess water quality at their recreational beaches. The BEACH Act amended the Clean Water Act to require the 35 eligible states and territories to update their recreational water quality standards using EPA’s 1986 criteria for pathogen indicators. In addition, the BEACH Act required EPA to (1) complete studies on pathogens in coastal recreational waters and how they affect human health, including developing rapid methods of detecting pathogens, by October 2003, and (2) publish new or revised water quality criteria by October 2005, to be reviewed and revised as necessary every 5 years thereafter. The BEACH Act also authorized EPA to award grants to states, localities, and tribes to develop comprehensive beach monitoring and public notification programs for their recreational beaches. To be eligible for BEACH Act grants, states are required to (1) identify their recreational beaches, (2) prioritize their recreational beaches for monitoring based on their use by the public and the risk to human health, and (3) establish a public notification program. EPA grant criteria give states some flexibility on the frequency of monitoring, methods of monitoring, and processes for notifying the public when pathogen indicators exceed state standards, including whether to issue health advisories or close beaches. Although the BEACH Act authorized EPA to provide $30 million in grants annually for fiscal years 2001 through 2005, since fiscal year 2001, congressional conference reports accompanying EPA’s appropriations acts have directed about $10 million annually for BEACH Act grants, and EPA has followed this congressional direction when allocating funds to the program. EPA has made progress implementing the BEACH Act’s provisions but has missed statutory deadlines for two critical requirements. Of the nine actions required by the BEACH Act, EPA has taken action on the following seven: Propose water quality standards and criteria—The BEACH Act required each state with coastal recreation waters to incorporate EPA’s published criteria for pathogens or pathogen indicators, or criteria EPA considers equally protective of human health, into its state water quality standards by April 10, 2004. The BEACH Act also required EPA to propose regulations setting forth federal water quality standards for those states that did not meet the deadline. On November 16, 2004, EPA published in the Federal Register a final rule promulgating its 1986 water quality standards for E. coli and enterococci for the 21 states and territories that had not adopted water quality criteria that were as protective of human health as EPA’s approved water quality criteria. According to EPA, all 35 states with coastal recreational waters are now using EPA’s 1986 criteria, compared with the 11 states that were using these criteria in 2000. Provide BEACH Act grants—The BEACH Act authorized EPA to distribute annual grants to states, territories, tribes and, in certain situations, local governments to develop and implement beach monitoring and notification programs.
Since 2001, EPA has awarded approximately $51 million in development and implementation grants for beach monitoring and notification programs to all 35 states. Alaska is the only eligible state that has not yet received a BEACH Act implementation grant because it is still in the process of developing a monitoring and public notification program consistent with EPA’s grant performance criteria. EPA expects to distribute approximately $10 million for the 2007 beach season, subject to the availability of funds. Publish beach monitoring guidance and performance criteria for grants—The BEACH Act required EPA to develop guidance and performance criteria for beach monitoring and assessment for states receiving BEACH Act grants by April 2002. After a year of consultations with coastal states and organizations, EPA responded to this requirement in 2002 by issuing its National Beach Guidance and Required Performance Criteria for Grants. To be eligible for BEACH Act grants, EPA requires recipients to develop (1) a list of beaches evaluated and ranked according to risk, (2) methods for monitoring water quality at their beaches, such as when and where to conduct sampling, and (3) plans for notifying the public of the risk from pathogen contamination at beaches, among other requirements. Develop a list of coastal recreational waters—The BEACH Act required EPA to identify and maintain a publicly available list of coastal recreational waters adjacent to beaches or other publicly accessible areas, with information on whether or not each is subject to monitoring and public notification. In March 2004, EPA published its first comprehensive National List of Beaches based on information that the states had provided as a condition for receiving BEACH Act grants. The list identified 6,099 coastal recreational beaches, of which 3,472, or 57 percent, were being monitored. The BEACH Act also requires EPA to periodically update its initial list and publish revisions in the Federal Register. However, EPA has not yet published a revised list, in part because some states have not provided updated information. Develop a water pollution database—The BEACH Act required EPA to establish, maintain, and make available to the public an electronic national water pollution database. In May 2005, EPA unveiled “eBeaches,” a collection of data pulled from multiple databases on the location of beaches, water quality monitoring, and public notifications of beach closures and advisories. This information has been made available to the public through an online tool called BEACON (Beach Advisory and Closing Online Notification). EPA officials acknowledge that eBeaches has had some implementation problems, including periods of downtime when states were unable to submit their data, and states have had difficulty compiling the data and getting them into EPA’s desired format. EPA is working to centralize its databases so that states can more easily submit information and expects that data reporting will become easier for states as they further develop their systems. Provide technical assistance on floatable materials—The BEACH Act required EPA to provide technical assistance to help states, tribes, and localities develop their own assessment and monitoring procedures for floatable debris in coastal recreational waters. EPA responded by publishing guidance titled Assessing and Monitoring Floatable Debris in August 2002.
The guidance provided examples of monitoring and assessment programs that have addressed the impact of floatable debris and examples of mitigation activities to address floatable debris. Provide a report to Congress on the status of BEACH Act implementation—The BEACH Act required EPA to report to Congress 4 years after enactment of the act and every 4 years thereafter on the status of implementation. EPA completed its first report for Congress, Implementing the BEACH Act of 2000: Report to Congress, in October 2006, which was 2 years after the October 2004 deadline. EPA officials noted that they missed the deadline because they needed additional time to include updates on current research and states’ BEACH Act implementation activities and to complete both internal and external reviews. EPA has not yet completed the following two BEACH Act requirements: Conduct epidemiological studies—The BEACH Act required EPA to publish new epidemiological studies concerning pathogens and the protection of human health for marine and freshwater by April 10, 2002, and to complete the studies by October 10, 2003. The studies were to (1) assess potential human health risks resulting from exposure to pathogens in coastal waters; (2) identify appropriate and effective pathogen indicator(s) to improve the timely detection of pathogens in coastal waters; (3) identify appropriate, accurate, expeditious, and cost-effective methods for detecting the presence of pathogens; and (4) provide guidance for state application of the criteria. EPA initiated its multiyear National Epidemiological and Environmental Assessment of Recreational Water Study in 2001 in collaboration with the Centers for Disease Control and Prevention. The first component of this study was to develop faster pathogen indicator testing procedures. The second component was to further clarify the health risk of swimming in contaminated water, as measured by these faster pathogen indicator testing procedures. While EPA completed these studies for freshwater—showing a promising relationship between a faster pathogen indicator and possible adverse health effects from bacterial contamination—it has not completed the studies for marine water. EPA initiated marine studies in Biloxi, Mississippi, in the summer of 2005, 3 years past the statutory deadline for beginning this work, but the work was interrupted by Hurricane Katrina. EPA initiated two additional marine water studies in the summer of 2007. Publish new pathogen criteria—The BEACH Act required EPA to use the results of its epidemiological studies to identify new pathogen indicators with associated criteria, as well as new pathogen testing measures, by October 2005. However, since EPA has not completed the studies on which these criteria were to be based, this task has been delayed. In the absence of new criteria for pathogens and pathogen indicators, states continue to use EPA’s 1986 criteria to monitor their beaches. An EPA official told us that EPA has not established a time line for completing these two remaining provisions of the BEACH Act but estimates it may take an additional 4 to 5 years. One EPA official told us that the initial time frames in the act may not have been realistic.
EPA’s failure to complete studies on the health effects of pathogens for marine waters and failure to publish revised water quality criteria for pathogens and pathogen indicators prompted the Natural Resources Defense Council to file suit against EPA on August 2, 2006, for failing to comply with the statutory obligations of the BEACH Act. To ensure that EPA complies with the requirements laid out in the BEACH Act, we recommended that it establish a definitive time line for completing the studies on pathogens and their effects on human health, and for publishing new or revised water quality criteria for pathogens and pathogen indicators. While EPA distributed approximately $51 million in BEACH Act grants between 2001 and 2006 to the 35 eligible states and territories, its grant distribution formula does not adequately account for states’ widely varied beach monitoring needs. When Congress passed the BEACH Act in 2000, it authorized $30 million in grants annually, but the act did not specify how EPA should distribute grants to eligible states. EPA determined that initially $2 million would be distributed equally to all eligible states to cover the base cost of developing water quality monitoring and notification programs. EPA then developed a distribution formula for future annual grants that reflected the BEACH Act’s emphasis on beach use and risk to human health. EPA’s funding formula includes the following three factors: Length of beach season—EPA selected beach season length as a factor because states with longer beach seasons would require more monitoring. Beach use—EPA selected beach use as a factor because more heavily used beaches would expose a larger number of people to pathogens, increasing the public health risk and thus requiring more monitoring. EPA used coastal population as a proxy for beach use because information on the number of beach visitors was not consistently available across all the states. Beach miles—EPA selected beach miles because states with longer shorelines would require more monitoring. EPA used shoreline miles, which may include industrial and other nonpublicly accessible areas, as a proxy for beach miles because verifiable data for beach miles were not available. Once EPA determined which funding formula factors to use, EPA officials weighted the factors. EPA intended that the beach season factor would provide the base funding and would be augmented by the beach use and beach mile factors. EPA established a series of fixed amounts that correspond to states’ varying lengths of beach seasons to cover the general expenses associated with a beach monitoring program. For example, EPA estimated that a beach season of 3 or fewer months would require approximately two full-time employees costing $150,000, while states with beach seasons greater than 6 months would require $300,000. Once the allotments for beach season length were distributed, EPA determined that 50 percent of the remaining funds would be distributed according to states’ beach use, and the other 50 percent would be distributed according to states’ beach miles, as shown in table 1. EPA officials told us that, using the distribution formula above and assuming a $30 million authorization, the factors were to have received relatively equal weight in calculating states’ grants and would have resulted in the following allocation: beach season—27 percent (about $8 million); beach use—37 percent (about $11 million); and beach miles—37 percent (about $11 million).
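To make the allocation arithmetic concrete, the following minimal sketch recomputes each factor's share of total funding. It assumes the fixed season-length allotments sum to exactly $8 million; the report says approximately $8 million, so the computed 80/10/10 split at $10 million only approximates the 82/9/9 split described in the next paragraph.

```python
# Sketch of the weighting behavior of EPA's BEACH Act grant formula as
# described above: fixed beach-season allotments come off the top, and the
# remainder is split equally between the beach use and beach miles factors.
# The $8 million season pool is an approximation taken from the report.
SEASON_POOL = 8_000_000

def factor_shares(total_funding):
    """Return each factor's share of total funding, in percent."""
    remainder = total_funding - SEASON_POOL
    use = miles = remainder / 2.0
    return {"beach season": 100.0 * SEASON_POOL / total_funding,
            "beach use": 100.0 * use / total_funding,
            "beach miles": 100.0 * miles / total_funding}

for total in (30_000_000, 10_000_000):
    shares = ", ".join(f"{name} {pct:.0f}%"
                       for name, pct in factor_shares(total).items())
    print(f"${total // 1_000_000} million total: {shares}")
# At $30 million the factors carry roughly equal weight (27/37/37);
# at $10 million the fixed season pool dominates (80/10/10 here).
```

With appropriations at a third of the authorized level, the fixed season allotments consume most of the pool, which is why the beach use and beach miles factors barely differentiate states, as the examples that follow show.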
However, because funding levels for BEACH Act grants have been about $10 million each year, once the approximately $8 million was allotted for beach season length, only $2 million, instead of nearly $22 million, remained to be distributed equally between the beach use and beach miles factors. This resulted in the following allocation: beach season—82 percent (about $8 million); beach use—9 percent (about $1 million); and beach miles—9 percent (about $1 million). Because beach use and beach miles vary widely among the states but account for a much smaller portion of the distribution formula, BEACH Act grant amounts may vary little between states that have significantly different shorelines or coastal populations. For example, across the Great Lakes, there is significant variation in coastal populations and in miles of shoreline, but current BEACH Act grant allocations are relatively flat. As a result, Indiana, which has 45 miles of shoreline and a coastal population of 741,468, received about $205,800 in 2006, while Michigan, which has 3,224 miles of shoreline and a coastal population of 4,842,023, received about $278,450 in 2006. Similarly, the current formula gives localities that have a longer beach season and significantly smaller coastal populations an advantage over localities that have a shorter beach season but significantly greater population. For example, Guam and American Samoa, with 12-month beach seasons and coastal populations of less than 200,000, each receive larger grants than Maryland and Virginia, with 4-month beach seasons and coastal populations of 3.6 and 4.4 million, respectively. If EPA reweighted the factors so that they were still roughly equal given the $10 million allocation, we believe that BEACH Act grants to the states would better reflect their needs. Consequently, we recommended that, if current funding levels remain the same, the agency revise the formula for distributing BEACH Act grants to better reflect the states’ varied monitoring needs by reevaluating the formula factors to determine whether the weight of the beach season factor should be reduced and whether the weight of the other factors, such as beach use and beach miles, should be increased. States’ use of BEACH Act grants to develop and implement beach monitoring and public notification programs has increased the number of beaches being monitored and the frequency of monitoring. However, states vary considerably in the frequency with which they monitor beaches, the monitoring methods used, and the means by which they notify the public of health risks. Specifically, 34 of the 35 eligible states have used BEACH Act grants to develop beach monitoring and public notification programs; the remaining state, Alaska, is in the process of setting up its program. However, these programs have been implemented somewhat inconsistently by the states, which could lead to inconsistent levels of public health protection at beaches in the United States. In addition, while the Great Lakes and other eligible states have been able to increase their understanding of the scope of contamination as a result of BEACH Act grants, the underlying causes of this contamination usually remain unresolved, primarily due to a lack of funding. For example, EPA reports that, nationwide, when beaches are found to have high levels of contamination, the most frequently listed source of contamination is “unknown.”
BEACH Act officials from six of the eight Great Lakes states that we reviewed—Illinois, Michigan, Minnesota, New York, Ohio, and Wisconsin—reported that the number of beaches being monitored in their state has increased since the passage of the BEACH Act in 2000. For example, in Minnesota, state officials reported that only one beach was being monitored prior to the BEACH Act, and there are now 39 beaches being monitored in three counties. In addition, EPA data show that, in 1999, the number of beaches identified in the Great Lakes was about 330, with about 250 being monitored. In 2005, the most recent year for which data are available, the Great Lakes states identified almost 900 beaches, of which about 550 were being monitored. In addition to an increase in the number of beaches being monitored, the frequency of monitoring at many of the beaches in the Great Lakes has increased. We estimated that 45 percent of Great Lakes beaches increased the frequency of their monitoring since the passage of the BEACH Act. For example, Indiana officials told us that prior to the BEACH Act, monitoring was done a few times per week at their beaches, but now monitoring is done 5 to 7 days per week. Similarly, local officials in one Ohio county reported that they used to test some beaches along Lake Erie twice a month prior to the BEACH Act but now test these beaches once a week. States outside of the Great Lakes region have reported similar benefits from receiving BEACH Act grants. For example, state officials from Connecticut, Florida, and Washington reported increases in the number of beaches they are now able to monitor or the frequency of the monitoring they are now able to conduct. Because of the information available from BEACH Act monitoring activities, state and local beach officials are now better able to determine which of their beaches are more likely to be contaminated, which are relatively clean, and which may require additional monitoring resources to help them better understand the levels of contamination that may be present. For example, state BEACH Act officials reported that they now know which beaches are regularly contaminated or are being regularly tested for elevated levels of contamination. We determined that officials at 54 percent of the Great Lakes beaches we surveyed believe that their ability to make advisory and closure decisions has increased or greatly increased since they initiated BEACH Act water quality monitoring programs. However, because EPA’s grant criteria and the BEACH Act give states and localities some flexibility in implementing their programs, we also identified significant variability among the Great Lakes states’ beach monitoring and notification programs. We believe that this variability is most likely occurring in other states as well because of the lack of specificity in EPA’s guidance. Specifically, we identified the following differences in how the Great Lakes states have implemented their programs. Frequency of monitoring. Some Great Lakes states are monitoring their high-priority beaches almost daily, while other states monitor their high-priority beaches as little as one to two times per week. The variation in monitoring frequency in the Great Lakes states is due in part to the availability of funding. For example, state officials in Michigan and Wisconsin reported insufficient funding for monitoring. Methods of sampling. Most of the Great Lakes states and localities use similar sampling methods to monitor water quality at local beaches.
For example, officials at 79 percent of the beaches we surveyed reported that they collected water samples during the morning, and 78 percent reported that they always collected water samples from the same location. Collecting data at the same time of day and from the same site ensures more consistent water quality data. However, we found significant variations in the depth at which local officials in the Great Lakes states were taking water samples. According to EPA, depth is a key determinant of microbial indicator levels. EPA’s guidance recommends that beach officials sample at the same depth—knee depth, or approximately 3 feet deep—for all beaches to ensure consistency and comparability among samples. Great Lakes states varied considerably in the depths at which they sampled water, with some sampling occurring at 1-6 inches and other sampling at 37-48 inches. Public notification. Local officials in the Great Lakes differ in the information they use to decide whether to issue health advisories or close beaches when water contamination exceeds EPA criteria and in how to notify the public of their decision. These differences reflect states’ varied standards for triggering an advisory, closure, or both. Also, we found that states’ and localities’ means of notifying the public of health advisories or beach closures vary across the Great Lakes. Some states post water quality monitoring results on signs at beaches; some provide results on the Internet or on telephone hotlines; and some distribute the information to local media. To address this variability in how the states are implementing their BEACH Act grant-funded monitoring and notification programs, we recommended that EPA provide states and localities with specific guidance on monitoring frequency, monitoring methods, and public notification. Further, even though BEACH Act funds have increased the level of monitoring being undertaken by the states, the specific sources of contamination at most beaches are not known. For example, we determined that local officials at 67 percent of Great Lakes beaches did not know the sources of bacterial contamination causing water quality standards to be exceeded during the 2006 beach season, and EPA officials confirmed that the primary source of contamination at beaches nationwide is reported by state officials as “unknown.” Moreover, because state and local officials in the Great Lakes states do not have enough information on the specific sources of contamination and generally lack funds for remediation, most of the sources of contamination at beaches have not been addressed. Local officials from these states indicated that they had taken actions to address the sources of contamination at an estimated 14 percent of the monitored beaches. EPA has concluded that BEACH Act grant funds generally may be used only for monitoring and notification purposes. While none of the eight Great Lakes state officials suggested that the BEACH Act was intended to help remediate the sources of contamination, several state officials believe that it may be more beneficial to use BEACH Act grants to identify and remediate sources of contamination rather than just continue to monitor water quality at beaches and notify the public when contamination occurs. Local officials also reported a need for funding to identify and address sources of contamination.
Furthermore, at EPA’s National Beaches Conference in October 2006, a panel of federal and academic researchers recommended that EPA provide the states with more freedom in how they spend their BEACH Act funding. To address this issue, we recommended that, as the Congress considers reauthorization of the BEACH Act, it consider providing EPA some flexibility in awarding BEACH Act grants to allow states to undertake limited research to identify specific sources of contamination at monitored beaches and certain actions to mitigate these problems, as specified by EPA. _ _ _ _ _ In conclusion, Madam Chairwoman, EPA has made progress in implementing many of the BEACH Act’s requirements, but it may still be several years before EPA completes the pathogen studies and develops the new water quality criteria required by the act. Until these actions are completed, states will have to continue to use existing outdated methods. In addition, the formula EPA developed to distribute BEACH Act grants to the states was based on the assumption that the program would receive its fully authorized allocation of $30 million. Because the program has not received full funding and EPA has not adjusted the formula to reflect reduced funding levels, the current distribution of grants fails to adequately take into account the varied monitoring needs of the states. Finally, as evidenced by the experience of the Great Lakes states, the BEACH Act has helped states increase their level of monitoring and their knowledge about the scope of contamination at area beaches. However, the variability in how the states are conducting their monitoring, how they are notifying the public, and their lack of funding to address the sources of contamination continues to raise concerns about the adequacy of the protection being provided to beachgoers. This concludes our prepared statement. We would be happy to respond to any questions you may have. If you have any questions about this statement, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Other key contributors to this statement include Ed Zadjura (Assistant Director), Eric Bachhuber, Omari Norman, and Sherry McDonald. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Waterborne pathogens can contaminate water and sand at beaches and threaten human health. Under the Beaches Environmental Assessment and Coastal Health (BEACH) Act, the Environmental Protection Agency (EPA) provides grants to states to develop water quality monitoring and public notification programs. This statement summarizes the key findings of GAO's May 2007 report, Great Lakes: EPA and States Have Made Progress in Implementing the BEACH Act, but Additional Actions Could Improve Public Health Protection. In that report, GAO (1) assessed the extent to which EPA has implemented the act's provisions, (2) examined concerns about EPA's BEACH Act grant allocation formula, and (3) described the experiences of the Great Lakes states in developing and implementing beach monitoring and notification programs using their grant funds.

EPA has taken steps to implement most BEACH Act provisions but has missed statutory deadlines for two critical requirements. While EPA has developed a national list of beaches and improved the uniformity of state water quality standards, it has not (1) completed the pathogen and human health studies required by 2003 or (2) published the new or revised water quality criteria for pathogens required by 2005. EPA stated that the required studies are ongoing; although some studies were initiated in the summer of 2005, the work was interrupted by Hurricane Katrina. EPA subsequently initiated two additional water studies in the summer of 2007. According to EPA, completion of the studies and development of the new criteria may take an additional 4 to 5 years. Further, although EPA distributed approximately $51 million in BEACH Act grants from 2001 through 2006, the formula EPA uses to make the grants does not accurately reflect the monitoring needs of the states. This occurs because the formula emphasizes the length of the beach season more than the other factors in the formula—beach miles and beach use. These other factors vary widely among the states, can greatly influence the amount of monitoring a state needs to undertake, and can increase the public health risk. (A hypothetical illustration of this weighting effect appears at the end of this summary.)

Thirty-four of the 35 eligible states have used BEACH Act grants to develop beach monitoring and public notification programs; Alaska is still in the process of developing its program. However, because state programs vary, they may not provide consistent levels of public health protection nationwide. GAO found that the states' monitoring and notification programs varied considerably in the frequency with which beaches were monitored, the monitoring methods used, and how the public was notified of potential health risks. For example, some Great Lakes states monitor their high-priority beaches as little as one or two times per week, while others monitor their high-priority beaches daily. In addition, when local officials review similar water quality results, some may choose to only issue a health advisory while others may choose to close the beach. According to state and local officials, these inconsistencies are due in part to the lack of adequate funding for their beach monitoring and notification programs. The frequency of water quality monitoring has increased nationwide since passage of the act, helping states and localities to identify the scope of contamination. However, in most cases, the underlying causes of contamination remain unknown.
Some localities report that they do not have the funds to investigate the source of the contamination or take actions to mitigate the problem, and EPA has concluded that BEACH Act grants generally may not be used for these purposes. For example, local officials at 67 percent of Great Lakes beaches reported that, when results of water quality testing indicated contamination at levels exceeding the applicable standards during the 2006 beach season, they did not know the source of the contamination, and only 14 percent reported that they had taken actions to address the sources of contamination.
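To make the weighting concern concrete, the sketch below computes grant shares under a hypothetical three-factor formula in which season length carries most of the weight. The weights, normalized state factors, and dollar amounts are invented for exposition only and do not reproduce EPA's actual formula.

```python
# Hypothetical illustration of how overweighting one factor skews grant
# shares. Weights, factor values (normalized 0-1), and the funding total are
# invented; this is not EPA's actual allocation formula.
weights = {"season_length": 0.6, "beach_miles": 0.2, "beach_use": 0.2}

states = {
    "State X": {"season_length": 0.9, "beach_miles": 0.2, "beach_use": 0.3},
    "State Y": {"season_length": 0.4, "beach_miles": 0.9, "beach_use": 0.8},
}

available_funds = 10_000_000  # hypothetical appropriation

def allocate(states, weights, total):
    """Distribute funds in proportion to each state's weighted score."""
    scores = {name: sum(weights[f] * value for f, value in factors.items())
              for name, factors in states.items()}
    score_total = sum(scores.values())
    return {name: total * score / score_total for name, score in scores.items()}

for name, grant in allocate(states, weights, available_funds).items():
    print(f"{name}: ${grant:,.0f}")
# State X's long season outweighs State Y's far greater beach miles and use.
```

Under these invented weights, State X receives the larger grant even though State Y has far more beach miles and heavier beach use, which is the pattern GAO's analysis suggests the actual formula can produce.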
DOD is one of the largest and most complex organizations in the world and is entrusted with more taxpayer dollars than any other federal department or agency. For fiscal year 2013, the department requested approximately $613.9 billion—$525.4 billion in spending authority for its base operations and an additional $88.5 billion to support overseas contingency operations, such as those in Iraq and Afghanistan. In support of its military operations, DOD performs an assortment of interrelated and interdependent business functions, such as logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. The department recently requested about $17.2 billion for its business systems environment and IT infrastructure investments for fiscal year 2013. According to the department's systems inventory, this environment is composed of about 2,200 business systems and includes 310 financial management, 724 human resource management, 580 logistics, 254 real property and installation, and 287 weapon acquisition management systems.

DOD currently bears responsibility, in whole or in part, for 14 of the 30 areas across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and seven other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas relate to DOD's major business operations, which are inextricably linked to the department's ability to perform its overall mission. Furthermore, the high-risk areas directly affect the readiness and capabilities of U.S. military forces and can affect the success of a mission. In particular, the department's nonintegrated and duplicative systems impair its ability to combat fraud, waste, and abuse. As such, DOD's business systems modernization is one of the department's specific high-risk areas and is an essential enabler in addressing many of the department's other high-risk areas. For example, modernized business systems are integral to the department's efforts to address its financial, supply chain, and information security management high-risk areas.

The department's approach to modernizing its business systems environment includes developing and using a business enterprise architecture (BEA) and associated enterprise transition plan, improving business systems investment management, and reengineering the business processes supported by its defense business systems. These efforts are guided by DOD's Chief Management Officer and Deputy Chief Management Officer (DCMO). The Chief Management Officer's responsibilities include developing and maintaining a departmentwide strategic plan for business reform; establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness; and monitoring and measuring the progress of the department. The DCMO's responsibilities include recommending to the Chief Management Officer methodologies and measurement criteria to better synchronize, integrate, and coordinate business operations to ensure alignment in support of the warfighting mission.
The DCMO is also responsible for developing and maintaining the department's enterprise architecture for its business mission area. The DOD Chief Management Officer and DCMO are to interact with several entities to guide the direction, oversight, and execution of DOD's business transformation efforts, which include business systems modernization. These entities include the Defense Business Systems Management Committee, which is intended to serve as the department's highest-ranking investment review and decision-making body for business systems programs and is chaired by the Deputy Secretary of Defense. The committee's composition includes the principal staff assistants, defense agency directors, the DOD Chief Information Officer (CIO), and the military department Chief Management Officers. Table 1 describes key DOD business systems modernization governance entities and their composition.

Since 2005, DOD has employed a "tiered accountability" approach to business systems modernization. Under this approach, responsibility and accountability for business architectures and systems investment management are assigned to different levels in the organization. For example, the DCMO is responsible for developing the corporate BEA (i.e., the thin layer of DOD-wide policies, capabilities, standards, and rules) and the associated enterprise transition plan. Each component is responsible for defining a component-level architecture and transition plan associated with its own tiers of responsibility and for doing so in a manner that is aligned with (i.e., does not violate) the corporate BEA. Similarly, program managers are responsible for developing program-level architectures and plans and for ensuring alignment with the architectures and transition plans above them. This concept is to allow for autonomy while also ensuring linkages and alignment from the program level through the component level to the corporate level.

Consistent with the tiered accountability approach, the NDAA for Fiscal Year 2008 required the Secretaries of the military departments to designate the department Under Secretaries as Chief Management Officers with primary responsibility for business operations. Moreover, the Duncan Hunter NDAA for Fiscal Year 2009 required the military departments to establish business transformation offices to assist their Chief Management Officers in the development of comprehensive business transformation plans. The military departments have designated their respective Under Secretaries as Chief Management Officers. In addition, the Department of the Navy (DON) and the Army have issued business transformation plans, and Air Force officials have stated that their department's corporate Strategic Plan also serves as its business transformation plan.

DOD's BEA is intended to serve as a blueprint for DOD business transformation. In particular, the BEA is to guide and constrain implementation of interoperable defense business systems by, among other things, documenting the department's business functions and activities, the information needed to execute its functions and activities, and the business rules, laws, regulations, and policies associated with its business functions and activities. According to DOD, the BEA is being developed using an incremental approach, in which each new release addresses business mission area gaps or weaknesses based on priorities identified by the department. The department considers its current approach to developing the BEA both a "top-down" and a "bottom-up" approach.
Specifically, it focuses on developing content to support investment management and strategic decision making and oversight ("top-down") while also responding to department needs associated with supporting system implementation, system integration, and software development ("bottom-up"). The department's most recent BEA version (version 9.0), released in March 2012, focuses on documenting information associated with its 15 end-to-end business process areas. (See table 2 for a list and description of these business process areas.) In particular, the department's most recent Strategic Management Plan has identified the Hire-to-Retire and Procure-to-Pay business process areas as its priorities. According to the department, the process of documenting the needed architecture information also includes working to refine and streamline each of the associated end-to-end business processes.

In addition, DOD's approach to developing its BEA involves the development of a federated enterprise architecture. Such an approach treats the architecture as a family of coherent but distinct member architectures that conform to an overarching architectural view and rule set. This approach recognizes that each member of the federation has unique goals and needs, as well as common roles and responsibilities with the levels above and below it. Under a federated approach, member architectures are substantially autonomous, although they also inherit certain rules, policies, procedures, and services from higher-level architectures. As such, a federated architecture gives autonomy to an organization's components while ensuring enterprisewide linkages and alignment where appropriate. Where commonality among components exists, there are also opportunities for identifying and leveraging shared services. Figure 1 provides a conceptual overview of DOD's federated BEA approach.

The certification of business system investments is a key step in DOD's IT investment selection process, which the department has aimed to model after GAO's Information Technology Investment Management (ITIM) framework. While defense business systems with a total cost over $1 million are required, as of June 2011, to use the Business Capability Lifecycle, a streamlined process for acquiring systems, these systems are also subject to the formal review and certification process through the investment review boards (IRB) before funds are obligated for them. Under DOD's current approach to certifying investments, there are several types of certification actions, as follows:

Certify or certify with conditions. An IRB certifies the modernization as fully meeting criteria defined in the act and IRB investment review guidance (certify) or imposes specific conditions to be addressed by a certain time (certify with conditions).

Recertify or recertify with conditions. An IRB certifies the obligation of additional modernization funds for a previously certified modernization investment (recertify) or imposes additional related conditions to the action (recertify with conditions).

Decertify. An IRB may decertify or reduce the amount of modernization funds available to an investment when (1) a component reduces funding for a modernization by more than 10 percent of the originally certified amount, (2) the period of certification for a modernization is shortened, or (3) the entire amount of funding is not to be obligated as previously certified. An IRB may also decertify a modernization after development has been terminated or if previous conditions assigned by the IRB are not met.
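The decertification triggers above can be expressed as a short decision rule. The Python sketch below models them under assumed inputs; the function, its parameters, and the data values are illustrative, since DOD's actual process is a document-driven IRB review rather than code.

```python
# Illustrative model of the decertification triggers described above. The
# function and its inputs are assumptions for this sketch, not DOD's process.
from enum import Enum

class CertificationAction(Enum):
    CERTIFY = "certify"
    CERTIFY_WITH_CONDITIONS = "certify with conditions"
    RECERTIFY = "recertify"
    DECERTIFY = "decertify"

def decertification_reasons(certified_amount, current_funding,
                            period_shortened, will_fully_obligate):
    """Return the reasons, if any, that would support decertification."""
    reasons = []
    if current_funding < 0.90 * certified_amount:
        reasons.append("funding reduced by more than 10 percent of certified amount")
    if period_shortened:
        reasons.append("period of certification shortened")
    if not will_fully_obligate:
        reasons.append("certified amount will not be fully obligated")
    return reasons

reasons = decertification_reasons(certified_amount=2_000_000,
                                  current_funding=1_700_000,
                                  period_shortened=False,
                                  will_fully_obligate=True)
action = CertificationAction.DECERTIFY if reasons else CertificationAction.RECERTIFY
print(action.value, reasons)  # funding cut of 15 percent exceeds the 10 percent trigger
```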
Congress included provisions in the act, as amended, that are aimed at ensuring DOD's development of a well-defined BEA and associated enterprise transition plan, as well as the establishment and implementation of effective investment management structures and processes. The act requires DOD to develop a BEA and an enterprise transition plan for implementing the BEA; identify each business system proposed for funding in DOD's fiscal year budget submissions; delegate the responsibility for business systems to designated approval authorities; establish an investment review structure and process; and not obligate appropriated funds for a defense business system program with a total cost of more than $1 million unless the approval authority certifies that the business system program meets specified conditions. The act also requires that the Secretary of Defense annually submit to the congressional defense committees a report on the department's compliance with the above provisions.

In addition, the act sets forth the following responsibilities: the DCMO is responsible and accountable for developing and maintaining the BEA, as well as integrating business operations; the CIO is responsible and accountable for the content of those portions of the BEA that support DOD's IT infrastructure or information assurance activities; the Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible and accountable for the content of those portions of the BEA that support DOD's acquisition, logistics, installations, environment, or safety and occupational health activities; the Under Secretary of Defense (Comptroller) is responsible and accountable for the content of those portions of the BEA that support DOD's financial management activities or strategic planning and budgeting activities; and the Under Secretary of Defense for Personnel and Readiness is responsible and accountable for the content of those portions of the BEA that support DOD's human resource management activities.

Between 2005 and 2008, we reported that DOD had taken steps to comply with key requirements of the NDAA relative to architecture development, transition plan development, budgetary disclosure, and investment review, and to satisfy relevant systems modernization management guidance. However, each report also concluded that much remained to be accomplished relative to the act's requirements and relevant guidance, and we made recommendations to address each of these areas. In May 2009, we reported that the pace of DOD's efforts in defining and implementing key institutional modernization management controls had slowed compared with progress made in each of the previous 4 years, leaving much to be accomplished to fully implement the act's requirements and related guidance. In addition, between 2009 and 2011, we found that long-standing challenges we previously identified remained to be addressed. The corporate BEA had yet to be extended (i.e., federated) to the entire family of business mission area architectures, and the military departments had yet to address key enterprise architecture management practices and develop important content. Budget submissions included some, but omitted other, key information about business system investments, in part because of the lack of a reliable, comprehensive inventory of all defense business systems. The business system information used to support the development of the transition plan and DOD's budget requests, as well as certification and annual reviews, was of questionable reliability.
DOD and the military departments had not fully defined key practices (i.e., policies and procedures) related to effectively performing both project-level (Stage 2) and portfolio-based (Stage 3) investment management as called for in the ITIM framework. Business system modernizations costing more than $1 million continued to be certified and approved, but these decisions were not always based on complete information. Further, we concluded that certification and approval decisions may not have been sufficiently justified because investments were certified and approved without conditions even though our prior reports had identified program weaknesses that were unresolved at the time of certification and approval. Accordingly, we reiterated existing recommendations and made additional recommendations to address each of these areas. DOD partially agreed with our recommendations and described actions being planned or under way to address them. Nonetheless, DOD's business systems modernization efforts remain on our high-risk list, due in part to issues such as those described above.

Furthermore, in 2011, we reported that none of the military department enterprise architecture programs had fully satisfied the requirements of our Enterprise Architecture Management Maturity Framework and recommended that they each develop a plan to do so. Our recommendation further stated that if any department did not plan to address any element of our framework, that department should include a rationale explaining why the element was not applicable. DOD and the Army concurred; the Air Force and DON did not. In this regard, DOD stated that the Air Force and DON did not have a valid business case that would justify the implementation of all of our framework elements. However, the Air Force and DON did not address why the elements called for by our recommendation should not be developed. Further, Army officials stated that the department had not yet issued a plan. To date, none of the military departments have addressed our recommendation.

Institutional management controls and processes such as those described above are vital to ensuring that DOD can effectively and efficiently manage an undertaking with the size, complexity, and significance of its business systems modernization, and minimize the associated risks. DOD continues to take steps to comply with the provisions of the Ronald W. Reagan NDAA for Fiscal Year 2005, as amended, and to satisfy relevant system modernization management guidance. However, despite undertaking activities to address NDAA requirements and its future vision, the department has yet to demonstrate significant results. Specifically, DOD has updated its BEA and is beginning to modernize its corporate business processes, but the architecture is still not federated through development of aligned subordinate architectures for each of the military departments, and it still does not include common definitions for key terms and concepts to help ensure that the respective portions of the architecture will be properly linked and aligned. In addition, the department has not included all business system investments in its fiscal year 2013 budget submission, due in part to an unreliable inventory of all defense business systems, and it has made limited progress regarding investment management policies and procedures, having not yet established the new organizational structure and guidance that DOD has reported will address statutory requirements.
In addition, while DOD has implemented a business process reengineering (BPR) review process, the department is not measuring and reporting the results. The department also continues to describe certification actions for its business system investments based on limited information, and it has fewer staff than it identified as needed to execute its responsibilities for business systems modernization. Specifically, the office of the DCMO, which took over these responsibilities from another office that was disestablished in 2011, reported that it had filled only 82 of its planned 139 positions, with 57 positions vacant.

DOD's limited progress in developing and implementing its federated BEA and its investment management policies and procedures, and in addressing our related recommendations, is due in part to the roles and responsibilities of key organizations and senior leadership positions being largely undefined. Furthermore, the impact of DOD's efforts to reengineer its end-to-end business processes has yet to be measured and reported, and efforts to execute needed activities are limited by challenges in staffing the office of the DCMO. Until the long-standing institutional modernization management controls provided for under the act, addressed in our recommendations, and otherwise called for in best practices are fully implemented, it is likely that the department's business systems modernization will continue to be a high-risk program.

Among other things, the act requires DOD to develop a BEA that would cover all defense business systems and their related functions and activities and that would enable the entire department to (1) comply with all federal accounting, financial management, and reporting requirements and (2) routinely produce timely, accurate, and reliable financial information for management purposes. The BEA should also include policies, procedures, data standards, and system interface requirements that are to be applied throughout the department. In addition, the NDAA for Fiscal Year 2012 added requirements that the BEA include, among other things, performance measures that are to apply uniformly throughout the department and a target defense business systems computing environment for each of DOD's major business processes. Furthermore, the act requires a BEA that extends to (i.e., federates) all defense organizational components and requires that each military department develop a well-defined enterprisewide business architecture and transition plan.

According to DOD, achieving its vision for a federated business environment requires, among other things, creating an overarching taxonomy and associated ontologies that can effectively map the complex interactions and interdependencies of the department's business environment. Such a taxonomy and ontologies would provide the various components of the federated BEA with the structure and common vocabularies to help ensure that their respective portions of the architecture are properly aligned and coordinated. In April 2011, DOD provided additional guidance that calls for the use of ontologies for federating the BEA and asserting systems compliance. In addition, DOD guidance states that, because of the interrelationship among models and across architecture efforts, it is useful to define an overarching taxonomy with common definitions for key terms and concepts in the development of the architecture.
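The federation problem that an overarching taxonomy is meant to solve, in which different component vocabularies describe the same architectural data, can be illustrated in a few lines. The term mappings and concept names below are invented examples, not actual BEA vocabulary.

```python
# Illustrative sketch: an overarching taxonomy maps component-level terms to
# common concepts so federated architecture content can be linked. All terms
# and concept names here are invented, not drawn from the BEA.
overarching_taxonomy = {
    "purchase request": "ProcureToPay:PurchaseRequest",
    "procurement request": "ProcureToPay:PurchaseRequest",  # synonym, same concept
    "retirement benefit": "HireToRetire:RetirementBenefit",
}

def align(component_terms, taxonomy):
    """Group terms by common concept; report terms with no common definition."""
    aligned, unmapped = {}, []
    for term in component_terms:
        concept = taxonomy.get(term.lower())
        if concept:
            aligned.setdefault(concept, []).append(term)
        else:
            unmapped.append(term)
    return aligned, unmapped

aligned, unmapped = align(
    ["Purchase Request", "Procurement Request", "Travel Voucher"],
    overarching_taxonomy)
print("aligned:", aligned)                # two component terms resolve to one concept
print("no common definition:", unmapped)  # the ambiguity a taxonomy must resolve
```

The point of the sketch is simply that, absent an authoritative mapping, two components can describe the same data under different names and no automated linkage is possible, which is the lesson DOD drew from its federation pilots, as discussed next.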
The need for such a taxonomy and associated ontologies was derived from lessons learned from federation pilots conducted within the department, which showed that federation of architectures was made much more difficult by the use of different definitions to represent the same architectural data. In addition, we have previously reported that defining and documenting roles and responsibilities is critical to the success of enterprise architecture efforts. More specifically, our Enterprise Architecture Management Maturity Framework calls for a corporate policy that identifies the major players associated with enterprise architecture development, maintenance, and use and provides for a performance and accountability framework that identifies each player's roles, responsibilities, and relationships and describes the results and outcomes for which each player is responsible and accountable.

In 2009, we reported that the then-current version of the BEA (version 6.0) addressed, to varying degrees, missing elements, inconsistencies, and usability issues that we previously identified, but that gaps still remained. In March 2012, DOD released BEA version 9.0, which continues to address the act's requirements in the following ways:

Version 9.0 organizes BEA content around its end-to-end business processes and adds content associated with these processes. For example, it added the "Accept Purchase Request" subprocess and placed this subprocess in the context of the Procure-to-Pay end-to-end business process. In addition, the Hire-to-Retire end-to-end business process includes the subprocess "Manage Benefits," which is linked to over 1,200 laws, regulations, and policies, as well as 11 subordinate business activities, such as "Manage Retirement Benefits." As a result, users can navigate the BEA to identify relevant subprocesses for each end-to-end business process and determine the important laws, regulations, and policies, business capabilities, and business rules associated with a given business process.

Version 9.0 includes enterprise data standards for the Procure-to-Pay and Hire-to-Retire end-to-end business processes. Specifically, as part of the Procure-to-Pay end-to-end business process, enterprise standards for Procurement Data and Purchase Request Data were added. In addition, for the Hire-to-Retire end-to-end business process, DOD updated the Common Human Resources Information Standards, a standard for representing common human resources management data concepts and requirements within the defense business environment. As a result, stakeholders can accelerate coordination and implementation of the high-priority end-to-end business processes and related statutory requirements.

Version 9.0 uses a standardized business process modeling approach to represent BEA process models. For example, the BEA uses the business process modeling notation standard to create a graphical representation of the "Accept Goods and Services" business process. Using a modeling approach assists DOD in its effort to eventually support automated queries of architecture information, including business models and authoritative data, to verify investment compliance and validate system solutions.

Version 9.0 includes performance measures and milestones for initiatives in DOD's Strategic Management Plan and relates the end-to-end business processes and operational activities documented in the BEA to the plan's initiatives and performance measures.
For example, the BEA identifies that the Procure-to-Pay end-to-end business process is related to the Strategic Management Plan's measure of the percentage of contract obligations competitively awarded. This is important for meeting the act's new requirements associated with performance measures and for enabling traceability of BEA content to the Strategic Management Plan.

DOD has defined a federated approach to its BEA that is to provide overarching governance across all business systems, functions, and activities within DOD. This approach involves the use of semantic web technologies to provide visibility across its respective business architecture efforts. Specifically, this approach calls for the use of nonproprietary, open standards and protocols to develop DOD architectures so that users can, among other things, locate and analyze needed architecture information across the department. Among other things, DOD's approach calls for the corporate BEA, each end-to-end business process area (e.g., Procure-to-Pay), and each DOD organization (e.g., Army) to establish a common vocabulary and for the programs and initiatives associated with these areas to use this vocabulary when developing their respective system and architecture products.

However, in 2011, we reported that each of the military departments had taken steps to develop architectural content but that none had a well-defined architecture to guide and constrain its business transformation initiatives. Further, since May 2011, the BEA has yet to be federated through development of aligned subordinate architectures for each of the military departments. Specifically, DON reported that it has not made any significant changes to its BEA content. The Army reported that it has adopted the end-to-end processes as the basis of the Army BEA, and the Air Force reported that it has added additional architecture business process content and mapped some of this content to the end-to-end processes. However, each has yet to fully satisfy the requirements of our Enterprise Architecture Management Maturity Framework.

In addition, the BEA does not include other important content that will be needed to achieve the office of the DCMO's vision for BEA federation. For example, while DOD has begun to develop a taxonomy that provides a hierarchical structure for classifying BEA information into categories, it has yet to develop an overarching taxonomy that identifies and describes all of the major terms and concepts for the business mission area. Further, version 9.0 does not include a systematic mechanism for evaluating and adding new taxonomy terms or rules for addressing ambiguous terms and descriptions. This is important because federation relies heavily on the use of a taxonomy to provide the structure to link and align enterprise architectures across the business mission area. Without an overarching taxonomy, there is an increased risk of not finding the most relevant content, thereby making the BEA less useful for making informed decisions regarding portfolio management and implementation of business system solutions. Similarly, DOD has begun to define corporate BEA ontologies and is developing ontologies in the human resources management area and for the U.S. Transportation Command, but BEA 9.0 does not include ontologies for all business mission domains and organizations. According to DOD officials, each domain and organization will develop its own ontology.
This is important because ontologies promote a comprehensive understanding of data and their relationships, and they enable DOD to implement automated queries of information and to integrate information across the department. However, DOD has yet to describe how the military departments will be held accountable for executing the tasks needed to establish domain ontologies for their respective BEAs or whether these ontologies are also to be used for their respective corporate enterprise architecture efforts. Without these ontologies, there is an increased risk of not fully addressing the act's requirements relating to integrating budget, accounting, and program information and systems and of not achieving DOD's vision for a federated architecture. DOD officials acknowledged these issues and stated that future versions of the BEA will leverage semantic technologies to create and document a common vocabulary and associated ontology. However, the department has yet to describe how each of the relevant entities will work together in developing the needed taxonomy and ontology.

In addition to describing certain content required to be in the BEA, as described earlier, the act assigns responsibility for developing portions of the BEA to various entities. The department has developed strategies that begin to document certain responsibilities associated with architecture federation. For example, the Global Information Grid Architecture Federation Strategy states that the DOD enterprise is responsible for establishing a governance structure for DOD architecture federation. The strategy also states that each mission area, such as the business mission area, is to develop and maintain mission area architectures, such as the BEA. However, given the many entities involved in BEA and DOD architecture federation, officials from the office of the DCMO have expressed concerns over who is accountable for achieving specific federation tasks and activities and how the new vision for BEA federation will be enforced.

Another requirement of the NDAA for Fiscal Year 2005, as amended, is that DOD's annual IT budget submission include key information on each business system for which funding is being requested, such as the system's precertification authority and designated senior official, the appropriation type and amount of funds associated with modernization and current services (i.e., operation and maintenance), and the associated Defense Business Systems Management Committee approval decisions. The department's fiscal year 2013 budget submission includes a range of information for 1,657 business system investments, including each system's name, approval authority, and appropriation type. The submission also identifies the amount of the fiscal year 2013 request that is for development and modernization versus operations and maintenance and notes the certification status (e.g., approved, approved with conditions, not applicable, and withdrawn) and the Defense Business Systems Management Committee approval date, where applicable. However, similar to prior budget submissions, the fiscal year 2013 budget submission does not reflect all business system investments. To prepare the submission, DOD relied on business system investment information (e.g., funds requested, mission area, and system description) that the components entered into the department's system used to prepare its budget submission (SNAP-IT).
In accordance with DOD guidance, and according to DOD CIO officials, the business systems listed in SNAP-IT should match the systems listed in the Defense Information Technology Portfolio Repository (DITPR)—the department's authoritative business systems inventory. However, the DITPR data provided by DOD in March 2012 included 2,179 business systems; SNAP-IT therefore did not reflect about 500 business systems that were identified in DITPR. In 2009, we reported that the information in the SNAP-IT and DITPR data repositories was not consistent and, accordingly, recommended that DOD develop and implement plans for reconciling and validating the completeness and reliability of information in its two repositories, and include information on the status of these efforts in the department's fiscal year 2010 report in response to the act. DOD agreed with the need to reconcile information between the two repositories and stated that it had begun to take actions to address this. In 2011, we reported that, according to the office of the DOD CIO, efforts to provide automated SNAP-IT and DITPR integration had been delayed due to increased SNAP-IT requirements in supporting the fiscal year 2012 budget submission and ongoing reorganization efforts within the department. DOD officials also told us that the department planned to restart the process of integrating the two repositories beginning in the third quarter of fiscal year 2011.

Since that time, DOD CIO officials have reiterated the department's commitment to integrating the two repositories and have taken steps toward achieving this end. For example, the officials stated that they have added a field to the DITPR repository that allows components to identify an individual system as a defense business system. These officials added that this change, once fully implemented, will be a key to providing automated DITPR and SNAP-IT integration. The Deputy DOD CIO (Resources) has also sent memoranda to specific DOD components identifying systems listed in DITPR that are not properly associated with systems identified in SNAP-IT and requesting that the components take action to address these inconsistencies. Nevertheless, DOD CIO officials responsible for the DITPR and SNAP-IT repositories stated that efforts to integrate them continue to be limited by ongoing organizational changes and the time required to address new system requirements unrelated to integrating the repositories. For example, these officials cited slowdowns resulting from the recent disestablishment of DOD's Networks and Information Integration organization, as well as time spent making adjustments to the SNAP-IT repository to accommodate new Office of Management and Budget reporting requirements. They added that all data are owned by the components, and therefore it is ultimately the responsibility of the components to update their respective data. However, DOD has not established a deadline by which it intends to complete the integration of the repositories and validate the completeness and reliability of the information.

Until DOD has a reliable, comprehensive inventory of all defense business systems, it will not be able to ensure the completeness and reliability of the department's IT budget submissions. Moreover, the lack of current and accurate information increases the risk of oversight decisions that are not prudent and justified.
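The reconciliation at issue is, at bottom, a comparison of two inventories. The sketch below flags systems that appear in one repository but not the other; the system identifiers are placeholders, and a real reconciliation would key on the repositories' actual authoritative system identifiers rather than display names.

```python
# Minimal sketch of reconciling two system inventories of the kind described
# above. Identifiers are placeholders; real DITPR/SNAP-IT records would be
# keyed on authoritative system identifiers, not names.
ditpr = {"SYS-001", "SYS-002", "SYS-003", "SYS-004", "SYS-005"}
snap_it = {"SYS-001", "SYS-002", "SYS-003"}

in_ditpr_only = ditpr - snap_it    # inventoried but absent from the budget tool
in_snap_it_only = snap_it - ditpr  # budgeted but absent from the inventory

print(f"{len(in_ditpr_only)} systems in DITPR but not SNAP-IT:", sorted(in_ditpr_only))
print(f"{len(in_snap_it_only)} systems in SNAP-IT but not DITPR:", sorted(in_snap_it_only))
```

The March 2012 gap described above (2,179 systems in DITPR against roughly 500 absent from SNAP-IT) is exactly this kind of set difference, which is why establishing a deadline for automated integration is a tractable, bounded task.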
DOD has made limited progress in defining and implementing investment management policies and procedures as required by the act and addressed in our ITIM framework since our last review in 2011. In addition, while the department has reported its intent to implement a new organizational structure and guidance to address statutory requirements, this structure and guidance have yet to be established. DOD also continues to approve investments on the basis of BEA compliance assessments that have not been validated. Further, while DOD has conducted various BPR activities related to its business system investments and underlying business processes, the department has not yet begun to measure the associated results. Thus, the extent to which these efforts have streamlined and improved the efficiency of the underlying business processes remains uncertain.

The act requires DOD to establish an IRB and investment management processes that are consistent with the investment management provisions of the Clinger-Cohen Act of 1996. As we have previously reported, organizations that satisfy Stages 2 and 3 of our ITIM framework have the investment selection, control, and evaluation governance structures, and the related policies, procedures, and practices, that are consistent with the investment management provisions of the Clinger-Cohen Act. We have used the framework in many of our evaluations, and a number of agencies have adopted it. In 2011, we reported that DOD had continued to establish investment management processes described in our ITIM framework but had not fully defined all key practices. For example, we reported that DOD had fully implemented two critical processes associated with capturing investment information and meeting business needs, and had partially completed the Stage 2 critical process associated with instituting an investment board. However, the department had yet to address other critical processes, including those associated with selecting investments and providing investment oversight. Since 2011, DOD has not fully implemented any additional key practices. Furthermore, the military departments have made very little progress in addressing elements of our ITIM framework that we previously reported as unsatisfied. For example:

In 2011, we reported that the Air Force had implemented four key practices related to effectively managing investments as individual business system programs (Stage 2). The Air Force had also addressed a key practice associated with portfolio-level investment management (Stage 3)—assigning responsibility for the development and modification of IT portfolio selection criteria. However, it has not implemented any additional practices since that time. The Air Force has described its intent to change its IT investment management structure and form a new branch to lay the foundation for integrated, efficient IT portfolio management processes; however, according to Air Force officials, this office is not yet fully established and faces competing personnel issues within the department. Further, Air Force officials stated that they are working to update the department's IT portfolio management and IT investment guidance, but the updates are not expected to be issued until November 2012.

In 2011, we reported that DON had implemented four key practices related to effectively managing investments as individual business system programs (Stage 2) and one key practice related to managing IT investments as a portfolio of programs (Stage 3).
Since that time, DON has not fully implemented any additional key practices. While the department demonstrated that it has documented policies and procedures related to establishing assessment standards to describe a program’s health (e.g., cost, schedule, and performance), these policies and procedures do not describe the enterprisewide IT investment board’s role in reviewing and making decisions based on this information. Such a description is important because the investment board has ultimate responsibility for making decisions about IT investments. In 2011, we reported that Army had implemented two key practices associated with capturing investment information. Specifically, it had established policies and procedures for collecting information about the department’s investments and had assigned responsibility for investment information collection and accuracy. These are activities associated with effectively managing investments as individual business system programs (Stage 2). However, with regard to managing IT investments as a portfolio of programs (Stage 3), the Army had not fully defined any of the five key practices. Further, since that time, the Army has not fully implemented any additional Stage 2 or Stage 3 practices. Army officials stated that the department has been focused on performing extensive portfolio reviews that are intended to inform many of the ITIM key practices and lead to updates of its investment management policies and procedures. As of April 2012, Army officials stated that the department had completed its first round of portfolio reviews. According to Army officials, the department has also worked to release its Business Systems Information Technology Implementation Plan, which is to provide details for its investment management strategy, due as part of the 2012 Army Campaign Plan; however, this plan has not yet been released. According to the department, the slow progress made on the investment management process at DOD and the military departments in the past year is due, in part, to the department’s activities to address the new NDAA for Fiscal Year 2012 requirements. Specifically, in April 2012, DOD reported that it was in the process of constituting a single IRB. According to DOD, this IRB is to replace the existing governance structure and is to be operational by October 2012. In addition, DOD reported that it intends to incrementally implement an expanded investment review process that analyzes business system investments using common decision criteria and establishes investment priorities while ensuring integration with the department’s budgeting process. The department has stated its intention to use our ITIM model to assess its ability to comply with its related investment selection and control requirements. Further, DOD officials stated that this new investment review process will encompass a portfolio-based approach to investment management that is to employ a structured methodology for classifying and assessing business investments in useful views across the department. DOD officials stated that an initial review of all systems requiring certification under the new NDAA requirements is also planned to be completed by the start of the new fiscal year. 
While the department has reported its intent to implement this new organizational structure and guidance to address statutory requirements and redefine the process by which the department selects, evaluates, and controls business system investments, this structure and guidance have yet to be established. DOD officials stated that the process has not yet been completed because they want to make sure they consider the best approach for investment management going forward. Accordingly, DOD is taking a phased approach, as described in the department's congressional report, which it intends to fully implement by October 2012. While it is too soon to evaluate the department's updated approach to business system investment management, we will further evaluate DOD's progress in defining and implementing its updated investment review processes in our fiscal year 2013 report on defense business systems modernization. Until DOD redefines and implements its investment management processes by the established deadline, and until the military departments make additional progress on their own investment management processes, it is unlikely that the thousands of DOD business system investments will be managed in a consistent, repeatable, and effective manner.

Since 2005, DOD has been required to certify and approve all business system modernizations costing more than $1 million to ensure that they meet specific conditions defined in the act. This process includes asserting that an investment is compliant with the BEA. The department continues to approve investments on the basis of architecture compliance. However, the department's policy and guidance associated with architecture compliance still do not call for compliance assertions to be validated, and officials agreed that not all of the compliance information has been validated. Department officials stated that some information associated with the compliance process has been validated, such as information associated with complying with DOD's Standard Financial Information Structure, which is intended to provide a standard financial management data structure and uniformity throughout DOD in reporting on the results of operations. We have previously made recommendations that the department amend existing policy and requirements to explicitly call for such validation to occur. DOD agreed with our findings and recommendations and stated that it planned to assign validation responsibilities and issue guidance describing the methodology for performing validation activities. Nonetheless, the department has not yet addressed our recommendation.

Among other things, BEA compliance is important for helping to ensure that DOD programs have been optimized to support DOD operations. However, as we have reported, without proper validation of compliance assertions, there is an increased risk that DOD will make business system investment decisions based on information that is inaccurate and unreliable. Under DOD's vision for a semantic BEA, described previously in this report, officials have stated that compliance validations will be conducted automatically using specialized software tools as program architecture artifacts are developed. However, until DOD achieves its semantic BEA vision and addresses our prior recommendation, compliance assertions will continue to be unvalidated.
In addition to the requirement that covered business systems be certified and approved to be in compliance with the BEA, the act requires that the Chief Management Officer certify that these business systems have undergone appropriate BPR activities. BPR is an approach for redesigning the way work is performed to better support an organization's mission and reduce costs. After considering an organization's mission, strategic goals, and customer needs, reengineering focuses on improving the organization's business processes. We have issued BPR guidance that, among other things, discusses the importance of having meaningful performance measures to assess whether BPR activities actually achieve the intended results. (An illustrative tally of such measures follows the activity summary below.) In this regard, the act, as amended, identifies the intended results of BPR reviews, such as ensuring that the business process to be supported by the defense business system will be as streamlined and efficient as practicable and that the need to tailor commercial off-the-shelf systems to meet unique requirements or incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable.

While DOD has conducted various BPR activities, including preparing BPR assessment guidance, conducting assessments to meet the act's requirements, and performing other BPR efforts such as refining its end-to-end business processes, the department has not yet begun to measure the associated results. The department's BPR activities are summarized as follows:

DOD issued interim guidance in April 2010 and final guidance in April 2011 to assist programs in addressing the act's BPR requirement. This guidance describes the types of documentation required for systems seeking certification, including a standardized BPR assessment form, and illustrates the process for submitting documentation for review and approval. DOD's final BPR guidance related to system certification generally comports with key practices described in our guidance. For example, DOD's guidance recognizes the importance of developing a clear problem statement and business case, analyzing the as-is and to-be environments, and developing a change management approach for implementing the new business process.

Consistent with its guidance, DOD has begun to implement its BPR review process in an effort to meet the act's requirements. Specifically, all systems in fiscal year 2011 submitted BPR assessment forms for review. In addition, the DCMO and military department Chief Management Officers are in the process of signing formal determinations that sufficient BPR was conducted with respect to each program.

The department has also performed BPR to respond to specific needs identified by departmental components and to refine its end-to-end business processes. For example, the Defense Commissary Agency, in cooperation with the Business Transformation Agency and now the office of the DCMO, used BPR to help formulate a future enterprise transition plan for the agency. In addition, DOD officials described activities to refine DOD's debt management business process, which is part of the Budget-to-Report end-to-end process. The standardization of business process models related to debt management led to updates in the latest BEA, which now provides tools that can be used to guide and constrain investments.
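Because our BPR guidance stresses meaningful performance measures, it is worth noting how little machinery such measures would require. The sketch below tallies two of the outcome measures discussed in this report (systems with material process changes and interfaces eliminated) over hypothetical review records; as noted below, DOD does not currently track these figures.

```python
# Illustrative tally of BPR outcome measures of the kind discussed in this
# report. The review records are hypothetical; DOD does not track these data.
bpr_reviews = [
    {"system": "System A", "material_change": True,  "interfaces_eliminated": 4},
    {"system": "System B", "material_change": False, "interfaces_eliminated": 0},
    {"system": "System C", "material_change": True,  "interfaces_eliminated": 1},
]

changed = sum(r["material_change"] for r in bpr_reviews)
eliminated = sum(r["interfaces_eliminated"] for r in bpr_reviews)

print(f"{changed} of {len(bpr_reviews)} reviewed systems had material process changes")
print(f"{eliminated} interfaces eliminated in total, by program:")
for r in bpr_reviews:
    if r["interfaces_eliminated"]:
        print(f"  {r['system']}: {r['interfaces_eliminated']}")
```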
While DOD has performed the BPR activities described above, the extent to which these efforts have streamlined and improved the efficiency of the underlying business processes remains uncertain because the department has yet to establish specific measures and report outcomes that align with its efforts. For example, the department does not track information such as the number of systems that have undergone material process changes or the number of interfaces reduced or eliminated as a result of BPR reviews. DOD officials noted that addressing these requirements has been challenging and that measuring progress, such as the number of interfaces reduced, has not been a priority. However, until the department develops and reports on performance measures associated with the development of its end-to-end processes and their related BPR activities, the department and its stakeholders will not know the extent to which BPR is effectively streamlining and improving its end-to-end business processes as intended.

Among other things, the act requires DOD to include, in its annual report to the congressional defense committees, a description of specific actions the department has taken on each business system submitted for certification. As applicable in fiscal year 2011, the act required that modernization investments involving more than $1 million in obligations be certified by a designated approval authority as meeting specific criteria, such as whether the system is in compliance with DOD's BEA and whether appropriate BPR efforts have been undertaken. Further, the act requires that the Defense Business Systems Management Committee approve each of these certifications. DOD's annual report identifies that the Defense Business Systems Management Committee approved 198 actions to certify, decertify, or recertify defense business system modernizations. These 198 IRB certification actions represented a total of about $2.2 billion in modernization spending. Specifically, the annual report states that during fiscal year 2011, the Defense Business Systems Management Committee approved 58 unique certifications, 102 recertifications, and 38 decertifications—101 with and 97 without conditions. Examples of conditions associated with individual systems include conditions related to business process reengineering and BEA compliance. While DOD has continued to report its certification actions, these actions have been based on limited information, such as unvalidated architecture compliance assertions, as discussed in the previous section. Until DOD addresses our prior recommendations, the department faces increased risk that it will not be able to effectively oversee its extensive business system investments.

Among other things, the act calls for the DCMO to be responsible and accountable for developing and maintaining the BEA, as well as integrating defense business operations. Although responsibility for these activities previously resided with the Business Transformation Agency, DOD announced the disestablishment of this agency in August 2010. In June 2011, we recommended that DOD expeditiously complete the implementation of the announced transfer of the agency's functions and provide specificity as to when and where these functions would be transferred. Subsequently, the DCMO defined an organizational structure consisting of a front office and six directorates and identified the staff resources it would need to fulfill its new responsibilities, which became effective in September 2011.
However, the office reported that it has not yet filled many of the positions needed to execute these responsibilities. In particular, as of April 2012, the office reported that it had filled only 82 of its planned 139 positions, with 57 positions (41 percent) remaining unfilled. For example, the office had filled only 12 of 43 positions within its Technology, Innovation, and Engineering Directorate, which, among other things, is responsible for developing the BEA. Further, only 10 of 19 positions within the Planning and Performance Management Directorate, 14 of 22 positions within the Business Integration Directorate, and 16 of 23 positions within the Investment and Acquisition Management Directorate had been filled. Table 3 identifies the key responsibilities of each DCMO organizational component as well as planned and actual staffing.

Establishing a well-defined, federated BEA and modernizing DOD's business systems and processes are critical to improving the department's business systems environment. The department is taking steps to establish such a business architecture and modernize its business systems and processes, but long-standing challenges remain. Specifically, while DOD has made progress in developing its corporate enterprise architecture, the architecture has yet to be federated through the development of aligned subordinate architectures for each of the military departments. The department has also taken steps to establish an infrastructure for a federated BEA, including documenting a vision for the BEA and developing content around its end-to-end business processes. However, the department's ability to achieve its federated BEA vision is limited by the lack of common definitions for key terms and concepts to help ensure that each of the respective portions of the architecture will be properly linked and aligned, as well as by the absence of a policy that clarifies roles, responsibilities, and accountability mechanisms. In addition, the information used to support the development of DOD's budget requests continues to be of questionable reliability, and no deadline for validating this information has been set. DOD has also not implemented additional key practices from our ITIM framework since our last review in 2011. Further, while the department has begun taking steps to reengineer its business systems and processes, and has issued sound guidance for conducting BPR associated with individual business systems, it has yet to measure and report on the impact these efforts have had on streamlining and simplifying its corporate business processes. Finally, the efforts of the office of the DCMO have been hampered by having fewer staff than the office identified as needed to support departmentwide business systems modernization. Collectively, these limitations continue to put at risk the billions of dollars spent annually on about 2,200 business system investments that support DOD functions such as departmentwide financial management and military personnel health care. Our previous recommendations to the department have been aimed at accomplishing these and other important activities related to its business systems modernization.
While the department has agreed with these recommendations, its progress in addressing the act's requirements, its vision for a federated architecture, and our related recommendations is limited, in part, by continued uncertainty surrounding the roles and responsibilities of key organizations and senior leadership positions. In light of this, it is essential that the Secretary of Defense issue a policy that resolves these issues, as doing so is necessary for the department to establish the full range of institutional management controls needed to address its business systems modernization high-risk area. It is equally important that DOD measure the impact of its BPR efforts and include, in the department's annual report in response to the act, both the results of those efforts and its progress in fully staffing the office of the DCMO.
Because we have existing recommendations that address many of the institutional management control weaknesses discussed in this report, we reiterate those recommendations. In addition, to ensure that DOD continues to implement the full range of institutional management controls needed to address its business systems modernization high-risk area, we recommend that the Secretary of Defense ensure that the Deputy Secretary of Defense, as the department's Chief Management Officer, establish a policy that clarifies the roles, responsibilities, and relationships among the Chief Management Officer, Deputy Chief Management Officer, DOD and military department Chief Information Officers, Principal Staff Assistants, military department Chief Management Officers, and the heads of the military departments and defense agencies, associated with the development of a federated BEA. Among other things, the policy should address the development and implementation of an overarching taxonomy and associated ontologies to help ensure that each of the respective portions of the architecture will be properly linked and aligned. In addition, the policy should address alignment and coordination of business process areas, military department and defense agency activities associated with developing and implementing each of the various components of the BEA, and relationships among these entities.
To ensure that annual budget submissions are based on complete and accurate information, we recommend that the Secretary of Defense direct the appropriate DOD organizations to establish a deadline for completing the integration of the repositories and validating the completeness and reliability of the information.
To facilitate congressional oversight and promote departmental accountability, we recommend that the Secretary of Defense ensure that the Deputy Secretary of Defense, as the department's Chief Management Officer, direct the Deputy Chief Management Officer to include in DOD's annual report to Congress on compliance with 10 U.S.C. § 2222 (1) the results of the department's BPR efforts, including the department's determination of the number of systems that have undergone material process changes, the number of interfaces eliminated as part of these efforts (i.e., by program, by name), and the status of its end-to-end business process reengineering efforts; and (2) an update on the office of the DCMO's progress toward filling staff positions and the impact of any unfilled positions on the ability of the office to conduct its work.
In written comments on a draft of this report, signed by the Deputy Chief Management Officer and reprinted in appendix II, the department partially concurred with our first recommendation, concurred with our second and third recommendations, and did not concur with the remaining recommendation. The department partially concurred with our first recommendation to establish a policy that clarifies the roles, responsibilities, and relationships among its various management officials associated with the development of a federated BEA. In particular, the department stated its belief that officials’ roles, relationships, and responsibilities are already sufficiently defined through statute, policy, and practice, and that additional guidance is not needed. However, the department added that it will continue to look for opportunities to strengthen and expand guidance, to include the new investment management and architecture processes. We do not agree that officials’ roles, relationships, and responsibilities are sufficiently defined in existing policy. For example, we found that DOD has not developed a policy that fully defines the roles, responsibilities, and relationships associated with developing and implementing the BEA. Moreover, in our view, responsibility and accountability for architecture federation will not be effectively addressed with additional guidance because guidance cannot be enforced. Rather, we believe a policy, which can be enforced, will more effectively establish responsibility and accountability for architecture federation. Without a policy, the department risks not moving forward with its vision for a federated architecture. Thus, we continue to believe our recommendation is warranted. The department concurred with our second recommendation, to establish a deadline by which it intends to complete the integration of the repositories and validate the completeness and reliability of information, and described commitments and actions being planned or under way. We support the department’s efforts to address our recommendation and reiterate the importance of following through in implementing the recommendation within the stated time frame. DOD also concurred with our third recommendation that the Deputy Secretary of Defense, as the department’s Chief Management Officer, direct the Deputy Chief Management Officer to include the results of the department’s BPR efforts in its annual report to Congress. However, the department stated that given the passage of the NDAA for Fiscal Year 2012, BPR authority now rests with the military department Chief Management Officers. As such, DOD stated that it would be appropriate for the recommendation to be directed to the BPR owners. We agree that the act requires the appropriate precertification authority for each covered business system to determine that appropriate BPR efforts have been undertaken. However, we disagree that our recommendation should be directed to the BPR owners. The recommendation is not intended to be prescriptive as to who should measure the impact of the BPR efforts. Rather, it calls for the reporting of the results of such efforts in the department’s annual report to Congress, which is prepared by the office of the DCMO under the department’s Chief Management Officer. The department did not concur with our fourth recommendation to provide an update on the office of the DCMO’s progress toward filling staff positions and the impact of any unfilled positions in its annual report to Congress. 
DOD stated that it does not believe that the annual report is the appropriate communication mechanism; however, it offered to provide us with an update. While we support the department’s willingness to provide us with an update, we, nonetheless, stand by our recommendation. The purpose of the annual report is to document the department’s progress in improving its business operations through defense business systems modernization. Thus, the potential for staffing shortfalls in the office of the DCMO to adversely impact the department’s progress should be communicated to the department’s congressional stakeholders as part of the report. Including information about the department’s progress in staffing the office that was recently established to be responsible for business systems modernization would not only facilitate congressional oversight, but also promote departmental accountability. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. As agreed with the congressional defense committees, our objective was to assess the Department of Defense’s (DOD) actions to comply with key aspects of section 332 of the National Defense Authorization Act (NDAA) for Fiscal Year 2005 (the act), as amended, 10 U.S.C. § 2222 and related federal guidance. These include (1) developing a business enterprise architecture (BEA) and a transition plan for implementing the architecture, (2) identifying systems information in its annual budget submission, (3) establishing a system investment approval and accountability structure along with an investment review process, and (4) certifying and approving any business system program costing in excess of $1 million. (See the background section of this report for additional information on the act’s requirements.) Our methodology relative to each of the four provisions is as follows: To address the architecture, we analyzed version 9.0 of the BEA, which was released on March 15, 2012, relative to the act’s specific architectural requirements and related guidance that our previous annual reports in response to the act identified as not being fully implemented. Specifically, we interviewed office of the Deputy Chief Management Officer (DCMO) officials and reviewed written responses and related documentation on steps completed, under way, or planned to address these weaknesses. We then reviewed architectural artifacts in BEA 9.0 to validate the responses and identify any discrepancies. We also determined the extent to which BEA 9.0 addressed 10 U.S.C. § 2222, as amended by the NDAA for Fiscal Year 2012. In addition, we analyzed documentation and interviewed knowledgeable DOD officials about efforts to establish a federated business mission area enterprise architecture. Further, we reviewed the military departments’ responses regarding actions taken or planned to address our previous recommendations on the maturity of their respective enterprise architecture programs. 
(For our previous recommendations regarding the military departments' enterprise architecture programs, see, for example, GAO-09-586 and GAO-11-684.) We did not determine whether the DOD Enterprise Transition Plan addressed the requirements specified in the act because an updated plan was not released during the time we were conducting our audit work.
To determine whether DOD's fiscal year 2013 IT budget submission was prepared in accordance with the criteria set forth in the act, we reviewed and analyzed the Report on Defense Business System Modernization Fiscal Year 2005 National Defense Authorization Act, Section 332, dated March 2012, and compared it with the specific requirements in the act. We also compared information contained in the department's system that is used to prepare its budget submission (SNAP-IT) with information in the department's authoritative business systems inventory (DITPR) to determine if DOD's fiscal year 2013 budget request included all business systems, and we assessed the extent to which DOD has made progress in addressing our related recommendation (a simple sketch of this comparison follows below). In addition, we reviewed DOD's budget submission to determine the extent to which it addresses 10 U.S.C. § 2222, as amended by the NDAA for Fiscal Year 2012. We also analyzed selected business system information contained in DITPR, such as system life cycle start and end dates, to validate the reliability of the information. We also interviewed officials from the office of DOD's Chief Information Officer (CIO) to discuss the accuracy and comprehensiveness of information contained in the SNAP-IT system, the discrepancies in the information contained in the DITPR and SNAP-IT systems, and efforts under way or planned to address these discrepancies.
To assess the establishment of DOD enterprise and component investment management structures and processes, we followed up on related weaknesses identified in our previous reports in response to the act. Specifically, we interviewed office of the DCMO and military department officials and reviewed written responses and related documentation on steps completed, under way, or planned to address these weaknesses. We also met with cognizant officials on steps taken to address new investment management requirements of the NDAA for Fiscal Year 2012. Further, we reviewed DOD's most recent BEA compliance guidance to determine the extent to which it addressed our related open recommendations. Finally, we reviewed business process reengineering documentation provided to support assertions that modernization programs had undergone business process reengineering assessments.
To determine whether the department was certifying and approving business system investments with annual obligations exceeding $1 million, we reviewed and analyzed all Defense Business Systems Management Committee certification approval memoranda. We also reviewed IRB certification memoranda issued prior to the Defense Business Systems Management Committee's final approval decisions for fiscal year 2011. We contacted officials from the office of the DCMO and investment review boards to discuss any discrepancies. In addition, we discussed with officials from the office of the DCMO its plans for updating the investment review process consistent with requirements of the NDAA for Fiscal Year 2012 and obtained related documentation.
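The SNAP-IT/DITPR reconciliation described above is, at its core, a comparison of two system inventories. The following minimal Python sketch illustrates the idea with hypothetical system identifiers; it is not DOD's or GAO's actual analysis code.

    # Minimal sketch of the budget-versus-inventory comparison described
    # above. The system names below are hypothetical placeholders, not
    # actual DITPR or SNAP-IT records.
    ditpr = {"System A", "System B", "System C", "System D"}  # authoritative inventory
    snap_it = {"System A", "System B"}                        # systems in the budget submission

    # Business systems listed in the inventory but absent from the budget request.
    missing_from_budget = sorted(ditpr - snap_it)
    print(f"{len(missing_from_budget)} systems missing from the budget submission:")
    print(missing_from_budget)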
To assess the office of the DCMO’s progress toward filling staff positions, we compared the number of authorized positions with the staff on board as of late April 2012; reviewed and analyzed related staffing documentation; and interviewed office of the DCMO officials about staffing. We did not independently validate the reliability of the cost and budget figures provided by DOD because the specific amounts were not relevant to our findings. We conducted this performance audit at DOD offices in Arlington and Alexandria, Virginia, from September 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individual named above, Neelaxi Lakhmani and Mark Bird, Assistant Directors; Debra Conner; Rebecca Eyler; Michael Holland; Anh Le; Donald Sebers; and Jennifer Stavros-Turner made key contributions to this report.
For decades, DOD has been challenged in modernizing its business systems. Since 1995, GAO has designated DOD's business systems modernization program as high risk, and it continues to do so today. To assist in addressing DOD's business system modernization challenges, the National Defense Authorization Act for Fiscal Year 2005 requires the department to take certain actions prior to obligating funds for covered systems. It also requires DOD to report annually to the congressional defense committees on these actions and requires GAO to review each annual report. In response, GAO performed its annual review of DOD's actions to comply with the act and related federal guidance. To do so, GAO reviewed, for example, the latest version of DOD's business enterprise architecture, fiscal year 2013 budget submission, investment management policies and procedures, and certification actions for its business system investments.
The Department of Defense (DOD) continues to take steps to comply with the provisions of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, as amended, and to satisfy relevant system modernization management guidance. While the department has initiated numerous activities aimed at addressing the act, it has been limited in its ability to demonstrate results. Specifically, the department released its most recent business enterprise architecture version, which continues to address the act's requirements and is consistent with the department's future vision for developing its architecture. However, the architecture has not yet resulted in a streamlined and modernized business systems environment, in part because DOD has not fully defined the roles, responsibilities, and relationships associated with developing and implementing the architecture. The department included a range of information for 1,657 business system investments in its fiscal year 2013 budget submission; however, the submission does not reflect about 500 business systems, due in part to the lack of a reliable, comprehensive inventory of all defense business systems. Further, the department has not implemented key practices from GAO's Information Technology Investment Management framework since GAO's last review in 2011. In addition, while DOD has reported its intent to implement a new organizational structure and guidance to address statutory requirements, this structure and guidance have yet to be established, and DOD has begun to implement a business process reengineering review process but has not yet measured and reported results. The department continues to describe certification actions in its annual report for its business system investments as required by the act—DOD approved 198 actions to certify, decertify, or recertify defense business system modernizations, which represented a total of $2.2 billion in modernization spending. However, the basis for these actions and subsequent approvals is supported with limited information, such as unvalidated architectural compliance assertions. Finally, the department lacks the full complement of staff it identified as needed to perform business systems modernization responsibilities. Specifically, the office of the Deputy Chief Management Officer, which took over these responsibilities from another office in September 2011, reported that 41 percent of its positions were unfilled.
DOD's progress in modernizing its business systems is limited, in part, by continued uncertainty surrounding the department's governance mechanisms, such as roles and responsibilities of key organizations and senior leadership positions.
Until DOD fully implements the long-standing institutional modernization management controls provided for under the act, addressed in GAO recommendations, and otherwise embodied in relevant guidance, its business systems modernization will likely remain a high-risk program. GAO recommends that the Secretary of Defense take steps to strengthen the department's mechanisms for governing its business systems modernization activities. DOD concurred with two of GAO's recommendations and partially concurred with one, but did not concur with the recommendation that it report progress on staffing the office responsible for business systems modernization to the congressional defense committees. GAO maintains that including staffing progress information in DOD's annual report will facilitate congressional oversight and promote departmental accountability.
Antibiotics are drugs used to treat bacterial infections; antibiotic resistance is the result of changes in bacteria that reduce or eliminate the effectiveness of antibiotics to treat infection. Experts have raised concerns about the lack of new antibiotics being developed to replace antibiotics that have become ineffective because of resistance. This lack of development has been attributed to various challenges. For example, the development of new drugs, including antibiotics, is often a costly and lengthy process. To obtain FDA's approval to market a new drug, a drug sponsor must conduct extensive research in order to demonstrate that the drug is both safe and effective. Although high costs and failure rates make drug development risky for drug sponsors, creating a safe and effective new drug can be financially rewarding for the drug sponsor and beneficial to the public. However, antibiotics are often less profitable than other drugs because they are generally designed to work quickly and are typically administered for only a brief time.
FDA oversees the drug development process and the approval of new drugs for marketing in the United States. FDA generally reviews drug sponsors' plans for conducting clinical trials and assesses drug sponsors' applications for the approval of new antibiotics. FDA also releases guidance for industry as a way to communicate its current thinking on a particular topic with the purpose of assisting drug sponsors in the development of drugs. For example, FDA has released several guidance documents to assist drug sponsors with the development of antibiotics, including information on designing clinical trials and measures to demonstrate a drug's effectiveness.
The Generating Antibiotic Incentives Now (GAIN) provisions required FDA to take certain steps to encourage drug sponsors to develop antibiotics, particularly those that treat serious or life-threatening infections. Specifically, FDA was required each year to review, and revise as appropriate, at least three guidance documents related to the development of antibiotics. FDA was also required to finalize a guidance document on the development of antibacterial drugs for serious or life-threatening infections, particularly in areas of unmet medical need. The GAIN provisions also established the qualified infectious disease product (QIDP) designation and made incentives available to sponsors of QIDPs—fast track designation, priority review designation, and 5 years of additional market exclusivity. Fast track designation may expedite FDA's review of an application by allowing drug sponsors to have increased communication with FDA and by allowing FDA to review portions of the application as they come in rather than waiting for a complete application. Although drug sponsors that receive QIDP designation are also entitled to receive fast track designation, they must still request this designation and typically do so prior to submitting their application to FDA for review. Priority review has been automatically granted to applications with the QIDP designation; it reduces FDA's goal time for the review of an application from 10 months to 6 months. Market exclusivity is a statutory provision for exclusive marketing rights for certain periods upon approval of a drug application, if certain statutory requirements are met. Market exclusivity periods last different lengths of time and have different scopes. For example, drugs designated for treatment of rare diseases or conditions may be eligible for orphan drug exclusivity, which lasts for 7 years for the specified conditions.
New chemical entity exclusivity provides 5 years of market exclusivity if a drug includes a new active moiety that has not been previously approved by FDA. QIDP-related market exclusivity adds an additional 5 years of exclusivity to certain qualifying drugs. For example, a drug receiving orphan drug exclusivity receives 7 years of exclusivity. If that drug’s application also received a QIDP designation, then the drug may receive an additional 5 years of exclusivity, for a total of 12 years of market exclusivity. Table 1 provides more information on the different incentives associated with the QIDP designation. In 2012, FDA created an internal group called the Antibacterial Drug Development Task Force (Task Force) to coordinate the release of guidance documents for antibiotics as part of its role to promote the development of these types of drugs. The GAIN provisions required FDA to review and revise each year, as appropriate, at least three guidance documents related to antibiotics, in part to ensure the documents reflected scientific developments. FDA officials also reviewed the list of available guidance documents and released guidance on new topics, where needed. Established after the passage of the GAIN provisions, the Task Force is a multidisciplinary group that comprises scientists and clinicians who participate in FDA’s oversight of the drug development process. Members of this Task Force work with experts, such as those from academia, industry, and professional societies, and with patient advocacy groups, to promote the development of new antibiotics. As of August 2016, FDA and the Task Force had coordinated the release of 14 updated or new guidance documents related to the development of antibiotics, although half of these documents remain in draft form. Table 2 lists guidance documents for antibiotics that FDA released in accordance with the GAIN provisions. Half of these 14 documents were finalized versions of prior draft guidance documents, such as FDA’s 2015 guidance on uncomplicated gonorrhea. FDA released the remaining guidance documents in draft form—specifically, the guidance either updated a prior draft guidance or was a brand new draft guidance for a new topic. For example, FDA’s most recent draft guidance on bacterial vaginosis, released in July 2016, is an update to prior draft guidance from July 1998, while FDA’s draft guidance on pulmonary tuberculosis, released in November 2013, is the first guidance on this topic FDA has released. FDA made these draft guidance documents publicly available on its website, in keeping with the agency’s good guidance practices. According to FDA, these draft guidance documents are made available to encourage public comment. Further, the draft guidance documents on developing antibiotics we examined indicated on the title page and on subsequent pages that they were intended for comment purposes only, not for implementation, and would represent the agency’s current thinking “when finalized.” Thus, because the draft documents on the FDA website for public comment have not been finalized, drug sponsors are left without written guidance indicating FDA’s current thinking on a given issue until the agency finalizes the guidance. As of January 2017, FDA had not responded to questions we provided in June 2016 about why some guidance documents remain in draft form or whether drug sponsors should rely on draft guidance documents while developing their drugs. FDA informed us that its responses were in agency clearance. 
FDA also had previously received a similar request from Congress and told us it would provide responses to us and to Congress simultaneously once the responses were finalized. FDA did not indicate when those responses would be finalized and provided to us. According to FDA officials, they selected the antibiotic-related guidance to review, and potentially revise, for various reasons, such as the emergence of new scientific information. Officials said that they also coordinate with external stakeholders on identifying and addressing challenges to antibiotic development, such as consulting with them about any changes they make to guidance documents. For example, FDA participated in meetings with the Foundation for the National Institutes of Health and worked with the Clinical Trials Transformation Initiative when it revised two of its guidance documents on developing drugs to treat acute bacterial skin and skin structure infections (ABSSSI) and on drug development for hospital-acquired and ventilator-associated bacterial pneumonia (HABP/VABP).
In 2013, FDA finalized its revision of a 2010 draft guidance document on developing drugs to treat ABSSSI based, in part, on work on clinical endpoints conducted by the Foundation for the National Institutes of Health. Differences between the finalized 2013 document and the draft 2010 document included changes in the minimum number of clinical trials FDA recommended that a drug sponsor conduct and the clinical endpoints FDA recommended that a sponsor use to assess the effectiveness of a drug. The 2010 guidance document recommended that drug sponsors conduct at least two adequate and well-controlled trials. When FDA released the finalized guidance in 2013, the agency noted that drug sponsors could also provide evidence from a single adequate and well-controlled clinical trial supported by independent evidence, such as data from another trial for a treatment of a different type of infection. Also, the recommended primary clinical endpoint was redefined as a reduction in lesion size of at least 20 percent at 48 to 72 hours, compared with the lesion size of patients who did not receive treatment.
FDA made similar changes to its draft 2010 guidance on HABP/VABP based, in part, on work resulting from its coordination with both the Foundation for the National Institutes of Health and the Clinical Trials Transformation Initiative. Industry stakeholders expressed concerns about this 2010 guidance document, including the agency's use of all-cause mortality (i.e., death from any cause) as the only clinical endpoint recommended to measure the effectiveness of a drug. Stakeholders also raised concerns about the restrictive inclusion criteria that were used to determine which patients could be enrolled in clinical trials. In 2014, FDA released a revised draft HABP/VABP guidance document that included changes aimed at addressing these concerns. Specifically, FDA recommended endpoints besides all-cause mortality and modified the criteria for clinical trial enrollment. Making the criteria for enrollment less restrictive could make it easier to enroll more patients into clinical trials; enrolling patients is a particular challenge noted by most of the drug sponsors included in our study.
The GAIN provisions included another requirement for FDA to release a draft guidance document on unmet medical need by the end of June 2013, and a final version of this guidance by the end of 2014.
In July 2013, FDA released its draft guidance—Guidance for Industry Antibacterial Therapies for Patients with Unmet Medical Need for the Treatment of Serious Bacterial Diseases. According to FDA, the agency did not meet the statutory deadline for publishing finalized guidance because of concerns that release of such guidance could create confusion within the industry while Congress considered draft legislation for a potential limited population antibacterial drug approval pathway. In October 2016, FDA told us that the agency intended to release the final guidance later that year, once the legislative disposition of this limited population approval pathway was clear. However, as of January 2017, this guidance document remained in draft form, and FDA had not released a finalized version. FDA recently told us that the agency plans to release finalized guidance "soon."
Although it is too soon to determine if GAIN has actually stimulated the development of new antibiotics, FDA has taken steps to implement the GAIN provision related to the QIDP designation and the incentives associated with the approval of a QIDP drug. Since 2012, FDA has granted 101 requests for the designation. Six drugs with the designation have been approved, but they were in later stages of the drug development process than the other drugs for which the designation was requested when they received this designation. FDA data from July 9, 2012 (when the QIDP designation was created), through December 31, 2015, showed that FDA granted almost all of the requests that it received for the QIDP designation. Specifically, drug sponsors submitted more than 100 requests to FDA for QIDP designation during this time, of which FDA granted more than 90 percent. (See table 3.)
Drug sponsors whose products received the QIDP designation are also eligible to receive fast track designation. From July 2012 through December 2015, FDA granted fast track designation to all of the drug sponsors whose applications received QIDP designation and that also requested fast track designation. (See table 4.) However, not all drug sponsors that requested the QIDP designation also requested fast track designation. For example, in fiscal year 2015 (the most recent year for which a full year of data are available), FDA granted 25 requests from drug sponsors for the QIDP designation, making them also eligible for fast track designation. While FDA granted 11 requests from sponsors for fast track designation, sponsors could have requested fast track designation in 14 additional instances, but they did not do so. According to FDA officials, this is likely the case for one of three reasons: (1) a drug sponsor has chosen not to request fast track designation; (2) a drug sponsor has not yet requested fast track designation but may intend to; or (3) the drug sponsor received fast track designation prior to the creation of the QIDP designation in 2012, and this is not reflected in the counts.
As of October 2016, FDA had approved applications for six drugs that received the QIDP designation for marketing in the United States, though these drugs were in the later stages of their development program when the designation was created in July 2012. (See table 5.) Four of the six approved drug applications also had fast track designation, and all of them received priority review. FDA did not grant fast track designation to two of these applications because, according to the sponsors, they did not apply for it.
To date, FDA has determined that five of these six drugs are eligible for 5 years of additional market exclusivity because each drug was eligible for at least one other type of market exclusivity and the statutory requirements for the 5 additional years were met.
All 10 of the drug sponsors in our study said that increased communication with FDA or the expedited review of their QIDP-designated drug applications due to GAIN was beneficial to their drug development processes. Five drug sponsors said that the QIDP designation was particularly helpful to smaller sponsors. This was confirmed by 3 of the smaller drug sponsors in our study, which said the QIDP designation helped continue their respective development programs for antibiotics. Research indicates that as of May 2016, over 80 percent of the antibiotic products in development were being developed by small drug sponsors. Eight of the drug sponsors in our study attributed this increase in communication with FDA, typically associated with the fast track designation, to the agency's implementation of the GAIN provisions and general commitment to supporting the development of new antibiotics. For example, 2 of the 10 sponsors specifically noted that they had not experienced this level of access to FDA officials during the review of their other, non-QIDP-designated drug applications. Five of the 10 sponsors said that FDA was receptive to considering new approaches to the design of their clinical trials. For example, one drug sponsor stated that FDA officials worked with sponsors to find innovative ways to design antibiotic clinical trials, allowing them to advance their drugs more quickly through the development process.
All four drug sponsors in our study whose drugs were approved with a QIDP designation also received priority review designation, which helped to advance their applications more quickly through the FDA review process. FDA data showed that these drugs had an average total review time of about 8 months, which is consistent with FDA's priority review goal time for new molecular entities. Five drug sponsors told us that priority review, for which QIDP-designated drugs have automatically qualified, was a particularly valuable incentive associated with the QIDP designation and GAIN provisions. According to FDA, the agency did not typically provide priority review to antibiotic drug applications before the GAIN provisions were enacted because these drugs generally do not meet the criteria to receive this designation.
Several of the drug sponsors in our study expressed concern about how much they could rely on FDA's draft guidance documents or about the lack of guidance describing the QIDP designation and its requirements. FDA released half of the guidance documents it revised in accordance with the GAIN provisions in draft form, and several of these documents were updates to prior draft guidance. Although these guidance documents were in draft form, FDA made them publicly available on its website and offered them as examples of guidance it had revised to encourage antibiotic development in response to GAIN. According to FDA, guidance documents represent FDA's current thinking on a topic and serve as a way for the agency to communicate with drug sponsors for the purpose of assisting them in developing drugs.
However, the FDA draft guidance documents on developing antibiotics that we examined indicated on the title page and on subsequent pages that they were intended for comment purposes only, not for implementation, and would represent the agency's current thinking "when finalized." Thus, while these draft guidance documents can include updated information, it is unclear if the new information represents FDA's current thinking on a topic or if the purpose of the draft guidance is limited to generating public comment.
Three of the 10 drug sponsors expressed concern about the role of FDA's draft guidance. For example, one drug sponsor called some of FDA's guidance "dated" and noted that FDA's guidance documents have remained in draft form for a long time. Another drug sponsor said it would rather rely on final guidance not directly applicable to its work than on draft guidance that was directly applicable. A third drug sponsor noted that FDA's draft unmet need guidance was issued in 2013 and said that it would be helpful if FDA finalized it.
Further, five sponsors in our study said that elements of the QIDP designation were unclear and that written guidance from FDA related to the QIDP designation and its incentives would help them to better understand the requirements and associated benefits. For example, one of these five drug sponsors told us that guidance about the eligibility requirements and process for obtaining a fast track designation for sponsors' QIDP-designated drug applications would be helpful. Our review of FDA data showed that from July 2012 through December 2015, drug sponsors that were eligible for fast track designation because they had received QIDP designation for their drug applications could have made 40 additional requests for fast track designation, but did not. Further, we found that the number of requests for fast track designation for eligible QIDP-designated drugs decreased from 20 in fiscal year 2014 to 11 in fiscal year 2015 (the two most recent years for which a full year of data were available). This same drug sponsor told us that the sponsors who did not request a fast track designation might have been unsure about what information to include in their request or whether their applications qualified for the designation. These five drug sponsors also said that elements associated with the market exclusivity incentive for which a QIDP-designated drug application might qualify were unclear. For example, one drug sponsor we spoke with said that uncertainty about how FDA was implementing the additional years of market exclusivity associated with this designation could affect a drug sponsor's long-term planning for drug development.
While FDA did not provide us with answers to our questions about why some QIDP-related guidance documents remain in draft form or whether drug sponsors should rely on draft guidance when developing antibiotics, FDA officials did agree that guidance related to the QIDP designation might be helpful to drug sponsors. FDA officials said the agency frequently receives questions about the QIDP designation and application process, but did not provide an explanation for why they have not yet developed QIDP guidance. They said they consider a range of factors when determining which guidance documents to develop, such as new scientific developments. Federal internal control standards on information and communication state that sharing quality information with external parties is necessary to achieving an entity's objectives.
FDA uses its guidance documents to communicate its current thinking on a topic to drug sponsors with the purpose of assisting them as they develop new drugs, including antibiotics. FDA also implements other expedited programs and has developed written guidance that details their features and the criteria a drug sponsor must meet to take advantage of them. However, a lack of clarity on the role of its draft guidance documents and a lack of guidance on the QIDP designation could create uncertainty for drug sponsors about how much reliance they should place on the draft documents and could diminish the likelihood that some drug sponsors apply for the designation because they do not fully understand its requirements and benefits. As a result, drug sponsors that are working to develop new antibiotics could be doing so without the benefit of clear, current guidance, and the likelihood that the QIDP designation will encourage the development of new antibiotics may be diminished.
Amid growing concern about resistance to existing antibiotics and a steady decline in the number of new antibiotics under development, the QIDP designation is a tool for FDA to encourage drug sponsors to develop new antibiotics by facilitating the review of applications for these drugs prior to their approval. FDA revised guidance documents related to antibiotic development in accordance with GAIN, but then released many of them in draft form with language suggesting that the guidance is for discussion purposes only. Several of the drug sponsors with whom we spoke expressed concern about the agency's use of draft guidance and how much reliance they could place upon it. Although FDA has granted 101 requests for the QIDP designation since the designation was created in 2012, half of the sponsors that we spoke with said they were unclear about how to apply for fast track designation if QIDP designation had been granted and about how FDA is interpreting and applying the market exclusivity incentive. Moreover, FDA has not produced written guidance that describes the requirements and benefits of the QIDP designation. Without a clear understanding of the role of draft guidance documents and of the requirements and benefits of the QIDP designation, drug sponsors' antibiotic development programs might not be based on FDA's most current thinking, and sponsors might not take advantage of the designation. Therefore, the likelihood that revised guidance on antibiotic development and the QIDP designation will encourage the development of new antibiotics may be diminished.
In order for drug sponsors to benefit from FDA's revised guidance on antibiotic development and take full advantage of the QIDP designation, we recommend that FDA clarify how drug sponsors should utilize draft guidance documents that were released in accordance with GAIN, and develop and make available written guidance on the QIDP designation that includes information about the process a drug sponsor must undertake to request the fast track designation and how the agency is applying the market exclusivity incentive.
We provided a draft of this report to the Department of Health and Human Services (HHS) for its review and comment. HHS provided written comments, which are reprinted in appendix III. In its written comments, HHS agreed to consider our first recommendation and to implement our second recommendation.
With regard to our first recommendation, which calls for FDA to clarify how drug sponsors should utilize the draft GAIN guidance documents, HHS stated that these documents are provided to enable public comment on ideas FDA is considering. HHS noted that they are not binding and that drug sponsors can use an alternative approach if the approach satisfies the requirements of applicable statutes. We appreciate FDA's consideration of this recommendation and firmly believe this clarification is warranted. GAIN called for FDA to issue draft guidance documents and to review and revise them, as appropriate, to reflect scientific developments. FDA's practice of releasing guidance that remains in draft form for years has created uncertainty for drug sponsors in this area. This approach leaves sponsors to plan their antibiotic drug development programs and determine how to meet statutory requirements without the benefit of written guidance representing FDA's most current thinking. Therefore, we believe our recommendation that FDA clarify how drug sponsors should use these draft documents is appropriate and well-supported.
HHS agreed with our second recommendation that FDA develop and issue written guidance on the QIDP designation. HHS said that FDA intends to proceed promptly with this task and that it will include information about the process for requesting the fast track designation and the criteria for determining whether an application qualifies for the 5 additional years of QIDP market exclusivity. HHS also provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of HHS, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
FDA has other efforts related to antimicrobial resistance beyond the steps taken to implement the Generating Antibiotic Incentives Now provisions described in this report. The following describes some of the agency's activities outside of the Center for Drug Evaluation and Research and the Antibacterial Drug Development Task Force. According to FDA, the FDA Antimicrobial Resistance Task Force comprises principal subject matter experts from across the agency. Members include the Center for Drug Evaluation and Research, the Center for Biologics Evaluation and Research, and the Center for Devices and Radiological Health. According to FDA, this Task Force is intended to foster dialogue between the Centers and to ensure that senior staff are apprised of major activities and developments related to antimicrobial resistance across the agency. In addition to coordinating activities within FDA, the Antimicrobial Resistance Task Force coordinates with external stakeholders, such as the Biomedical Advanced Research and Development Authority on developing a network of sites for clinical trials for antibiotics. It also serves as the agency's primary lead on other efforts, such as the President's Combating Antibiotic-Resistant Bacteria initiative.
The Center for Biologics Evaluation and Research within FDA regulates biological products for human use, including nontraditional therapies that have the potential to be marketed as alternatives to antibiotics. According to officials, there are three types of nontraditional therapies under development—phage therapy, probiotics, and fecal microbiota. Lytic bacteriophage (phage) therapy involves the use of viruses to destroy the cell membrane of bacteria. Interest in this type of therapy has grown, in part, because of the growing global threat of antimicrobial resistance. This type of therapy has been reported to have been used in Eastern Europe; however, use in the United States has only been investigational. Probiotics, also referred to as live biotherapeutic products, are products that contain live organisms, such as bacteria, that are found naturally in humans. Proposed uses include the prevention or treatment of diarrhea associated with infections by certain bacteria or with use of antibiotics. Fecal microbiota for transplantation refers to a procedure in which fecal matter or bacteria from the stool of healthy individuals are administered to a patient to treat a disease. Infection by certain harmful bacteria can result in debilitating and sometimes fatal diarrhea.
The Center for Devices and Radiological Health is the Center within FDA that is responsible for overseeing the safety and effectiveness of medical devices sold in the United States, including in vitro diagnostics. In vitro diagnostics include antimicrobial susceptibility testing devices, which are used by laboratories to assess whether a particular bacterium is susceptible or resistant to single or multiple antibiotics. These tests are based on antibiotic breakpoints, which are the antibiotic concentrations used to determine bacterial susceptibility. Officials said they work with the Center for Drug Evaluation and Research to jointly publish guidance related to antimicrobial susceptibility testing devices. According to officials, the challenges to the development of antimicrobial susceptibility testing devices are mostly scientific. For example, they said it is inherently challenging to develop a diagnostic test that quickly and accurately identifies whether a patient's symptoms are caused by a bacterium or a virus and, for bacterial infections, whether the bacterium is resistant to standard therapies.
The guidance documents listed below were released by the Food and Drug Administration in relation to the Generating Antibiotic Incentives Now statute as of August 2016.
Department of Health and Human Services. Food and Drug Administration. Bacterial Vaginosis: Developing Drugs for Treatment Guidance for Industry (Draft). Silver Spring, Md.: 2016. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM510948.pdf.
Department of Health and Human Services. Food and Drug Administration. Vulvovaginal Candidiasis: Developing Drugs for Treatment Guidance for Industry (Draft). Silver Spring, Md.: 2016. http://www.fda.gov/ucm/groups/fdagov-public/@fdagov-drugs-gen/documents/document/ucm509411.pdf.
Department of Health and Human Services. Food and Drug Administration. Anthrax: Developing Drugs for Prophylaxis of Inhalational Anthrax Guidance for Industry (Draft). Silver Spring, Md.: 2016. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM070986.pdf.
Department of Health and Human Services. Food and Drug Administration. Uncomplicated Gonorrhea: Developing Drugs for Treatment Guidance for Industry. Silver Spring, Md.: 2015. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM401620.pdf.
Department of Health and Human Services. Food and Drug Administration. Complicated Urinary Tract Infections: Developing Drugs for Treatment Guidance for Industry. Silver Spring, Md.: 2015. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM070981.pdf.
Department of Health and Human Services. Food and Drug Administration. Complicated Intra-Abdominal Infections: Developing Drugs for Treatment Guidance for Industry. Silver Spring, Md.: 2015. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM321390.pdf.
Department of Health and Human Services. Food and Drug Administration. Guidance for Industry Hospital-Acquired Bacterial Pneumonia and Ventilator-Associated Bacterial Pneumonia: Developing Drugs for Treatment (Draft). Silver Spring, Md.: 2014. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM234907.pdf.
Department of Health and Human Services. Food and Drug Administration. Guidance for Industry Community-Acquired Bacterial Pneumonia: Developing Drugs for Treatment. Silver Spring, Md.: 2014. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM123686.pdf.
Department of Health and Human Services. Food and Drug Administration. Guidance for Industry Pulmonary Tuberculosis: Developing Drugs for Treatment (Draft). Silver Spring, Md.: 2013. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM373580.pdf.
Department of Health and Human Services. Food and Drug Administration. Guidance for Industry Acute Bacterial Skin and Skin Structure Infections: Developing Drugs for Treatment. Silver Spring, Md.: 2013. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm071185.pdf.
Department of Health and Human Services. Food and Drug Administration. Guidance for Industry Antibacterial Therapies for Patients with Unmet Medical Need for the Treatment of Serious Bacterial Diseases (Draft). Silver Spring, Md.: 2013. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM359184.pdf.
In addition to the contact named above, Tom Conahan, Assistant Director; Gay Hee Lee, Analyst-in-Charge; George Bogart; Laurie Pachter; Sarah M. Sheehan; and Katherine A. Singh made key contributions to this report.
Antibiotics have long played a key role in treating infections, but this role is threatened by growing resistance to existing antibiotics and the decline in the development of new drugs. FDA oversees the approval of drugs for the U.S. market. The GAIN provisions created the QIDP designation and its associated incentives to encourage the development of new drugs to treat serious or life-threatening infections. While it is too soon to tell if GAIN has stimulated the development of new drugs, GAO was asked to provide information on FDA's efforts to implement GAIN. This report examines (1) steps FDA has taken to encourage the development of antibiotics since GAIN, and (2) drug sponsors' perspectives on FDA's efforts. GAO analyzed FDA data on requests for the QIDP designation from July 2012 through December 2015 (the most recent data available at the time of GAO's review), and on drugs approved with this designation during the same time frame. GAO reviewed relevant FDA guidance documents and internal controls. GAO interviewed FDA officials and obtained information from a nongeneralizable selection of 10 drug sponsors about FDA efforts.
The Food and Drug Administration (FDA) released updated or new guidance for antibiotic development and used the qualified infectious disease product (QIDP) designation to encourage the development of new antibiotics. As of August 2016, FDA, an agency within the Department of Health and Human Services (HHS), had coordinated the release of 14 updated or new guidance documents on antibiotic development, in compliance with the Generating Antibiotic Incentives Now (GAIN) provisions of the Food and Drug Administration Safety and Innovation Act of 2012. However, half of these guidance documents remain in draft form. The GAIN provisions required FDA to review and, as appropriate, revise guidance documents related to antibiotics, in part to ensure that they reflected scientific developments. FDA used the QIDP designation and its incentives, created under the GAIN provisions, to encourage drug sponsors to develop new antibiotics. Incentives available under the QIDP designation include eligibility for fast track designation—which allows drug sponsors to have increased interaction with FDA and allows FDA to review portions of sponsors' applications as they come in rather than waiting for a complete application. FDA granted 101 out of 109 requests for the QIDP designation from July 2012, when the designation was created, through December 2015. FDA also approved six drugs with the QIDP designation for market in the United States.
The 10 drug sponsors in GAO's study said that GAIN has facilitated FDA's review of their drug applications, and 8 said they experienced increased communication with FDA due to the agency's implementation of GAIN and general commitment to supporting the development of new antibiotics. However, several were uncertain whether they could rely on FDA's draft guidance or were concerned about the lack of guidance describing the QIDP designation and its requirements. For example, 2 drug sponsors expressed concern about the role of FDA's draft guidance, and a third sponsor said it would be helpful if certain draft guidance was finalized. Five additional drug sponsors in GAO's study stated that written guidance from FDA related to the QIDP designation and its incentives, such as the process for obtaining fast track designation for a QIDP-designated application, would help them to better understand the designation's requirements and associated benefits.
In response to GAIN, FDA released some of its updated guidance documents in draft form on its website; these documents indicated that they were intended for comment purposes only, not for implementation, and would represent the agency's current thinking "when finalized." While these draft guidance documents can include updated information, it is unclear if the new information represents FDA's current thinking on a topic or if the purpose of the draft guidance is limited to generating public comment. Internal control standards for the federal government on information and communication state that sharing quality information with external parties is necessary to achieving an entity's objectives. The lack of clarity on the role of draft guidance and the lack of written guidance on the QIDP designation create uncertainty for drug sponsors about how much reliance they should place on these draft documents and could diminish the likelihood that drug sponsors apply for the designation because they do not fully understand its requirements and benefits. GAO recommends that FDA clarify the role of its draft guidance and develop written guidance on the QIDP designation to help drug sponsors better understand the designation and its associated incentives. HHS said it would consider GAO's first recommendation and agreed with the second. GAO believes the first recommendation should also be adopted.
The Social Security Act requires that payments to MA plans be adjusted for variation in the cost of providing health care to beneficiaries on the basis of various risk factors, including health status. For most MA beneficiaries, CMS uses its CMS-Hierarchical Condition Categories (CMS-HCC) model to risk-adjust payments to MA plans. HCCs, which represent major medical conditions, are groups of related medical diagnoses ranked on the basis of disease severity and cost. The CMS-HCC risk adjustment model uses enrollment and claims data from Medicare FFS. The model uses beneficiary characteristic and diagnostic data from a base year to calculate the expected health care expenditures under Medicare FFS for the following year. For example, CMS used MA beneficiary diagnostic and demographic data for 2009 to determine the risk scores used to adjust payments to MA plans in 2010. The Social Security Act further directs CMS to ensure that adjustments to payments for health status reflect changes in treatment and coding practices in Medicare FFS and differences in coding patterns between MA plans and providers under Medicare Parts A and B, to the extent differences are identified. To ensure payment accuracy, the act requires CMS to conduct an annual analysis of these coding differences in time to incorporate the results into the risk scores for each year. In conducting such an analysis, CMS is required to use data submitted with respect to 2004 and subsequent years, as such data are available. CMS is required to continue adjusting risk scores for coding differences until it is able to implement risk adjustment using MA diagnostic, cost, and use data. Beginning with 2014, the diagnostic coding adjustments are subject to a statutory minimum, which was recently amended by the American Taxpayer Relief Act of 2012. Specifically, the Social Security Act, as amended, requires CMS to reduce MA risk scores by at least 1.5 percentage points more than the 2010 adjustment (a total of 4.9 percent) for 2014 and increases the annual minimum percentage reduction to not less than 5.9 percent for 2019 and subsequent years. CMS did not adjust MA risk scores in 2008 or 2009, the first years for which a diagnostic coding adjustment was required under the Deficit Reduction Act of 2005. However, for 2010 CMS estimated that 3.4 percent of MA beneficiary risk scores were attributable to differences in diagnostic coding over the previous 3 years and reduced MA beneficiaries' 2010 risk scores by 3.4 percent. To calculate this percentage, CMS estimated the annual difference in disease score growth between MA and Medicare FFS beneficiaries for three different groups of beneficiaries who were enrolled in either the same MA plan or Medicare FFS from 2004 to 2005, 2005 to 2006, and 2006 to 2007. CMS accounted for differences in age and mortality when estimating the difference in disease score growth between MA and Medicare FFS beneficiaries for each period. Then, CMS calculated the average of the three estimates. To apply this average estimate to 2010 MA beneficiaries, CMS multiplied the average annual difference in risk score growth by its estimate of the average length of time that 2010 MA beneficiaries had been continuously enrolled in the same MA plans over the previous 3 years, and multiplied this result by 81.8 percent, its estimate of the percentage of 2010 MA beneficiaries who were enrolled in an MA plan in 2009 and therefore were previously exposed to MA coding practices.
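The calculation just described can be written compactly as follows (the notation is ours, not CMS's):

```latex
% CMS's 2010 coding adjustment, restated from the description above:
%   d_1, d_2, d_3 : estimated annual differences in disease score growth
%                   between MA and FFS beneficiaries for 2004-05, 2005-06,
%                   and 2006-07, respectively
%   t             : estimated average years of continuous enrollment in the
%                   same MA plan among 2010 MA beneficiaries over the
%                   previous 3 years
\bar{d} = \frac{d_1 + d_2 + d_3}{3},
\qquad
A_{2010} = \bar{d} \times t \times 0.818 \approx 3.4\ \text{percent}
```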
CMS applied the same 3.4 percent adjustment it used for 2010 risk scores to risk scores in 2011 and 2012. CMS's risk score adjustment for diagnostic coding differences for 2010 through 2012 was too low. Specifically, we estimated that the cumulative impact of coding differences on risk scores increased from 2010 through 2012 and was greater than CMS's risk score adjustment of 3.4 percent for each of the 3 years. In updating our analysis from the January 2012 report, we estimated that cumulative risk scores in 2010 were 4.2 percent higher than they likely would have been if the same beneficiaries had been enrolled continuously in FFS. Using the methodology described earlier, we estimated that differences in diagnostic coding resulted in 2011 risk scores that were 4.6 to 5.3 percent higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS. This upward trend continued for 2012, with estimated risk scores 4.9 to 6.4 percent higher (see fig. 1). Because CMS's adjustment to risk scores for 2010 through 2012 to account for diagnostic coding differences was too low, we estimated that MA plans received excess payments of between $3.2 billion and $5.1 billion over the 3-year period. CMS's annual 3.4 percent reduction in risk scores is equivalent to reducing total payments to MA plans by an estimated $2.8 billion in 2010, $3.0 billion in 2011, and $3.2 billion in 2012. According to our estimates, the amount of the excess payments to MA plans after accounting for CMS's adjustments was $0.6 billion in 2010, $1.1 billion to $1.6 billion in 2011, and $1.5 billion to $2.9 billion in 2012. (See fig. 2.) According to CMS officials, the agency used the same methodology to determine the 2013 risk score adjustment as it used in 2011 and 2012, resulting in a risk score adjustment of 3.4 percent for 2013. CMS conducted a data-based analysis of coding differences for 2013 and considered the results, along with other factors, in determining the adjustment provided for in the Social Security Act. To conduct its data-based analysis, CMS officials reported that they used the same methodology they used to calculate the 3.4 percent adjustment for 2010, but incorporated more recent data. In addition to the results of their data-based analysis, CMS officials told us that they took into account other factors when determining the 2013 risk score adjustment, such as payment changes made to the MA program under the Patient Protection and Affordable Care Act, the stability of the MA program, and the maintenance of benefits for seniors. CMS's risk score adjustment of 3.4 percent for 2013 is the same as the one it used in 2010, 2011, and 2012. The Social Security Act does not prescribe the methodology that CMS is to use in adjusting for differences in diagnostic coding and, in this regard, it provides the agency with discretion in designing and conducting its analysis of coding differences. However, the express purpose of the requirements to conduct and incorporate a data-based analysis of coding differences into the risk scores is to ensure payment accuracy. The statute does not provide for factors other than the results of the analysis to be incorporated into the adjustment, suggesting that accuracy would be achieved through the incorporation of these analytical results by themselves.
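The dollar ranges reported above can be roughly reproduced from the percentages alone. The Python sketch below is illustrative only (GAO's actual estimates were derived from plan bids and enrollment data, as described in the scope and methodology appendix); it scales CMS's payment-reduction equivalents in proportion to the estimated risk score percentages:

```python
# Illustrative approximation only: scales CMS's payment-reduction
# equivalents in proportion to GAO's estimated coding-difference
# percentages, then subtracts what CMS's 3.4 percent adjustment already
# removed. This is not GAO's methodology, which used plan bids and
# enrollment data.
CMS_ADJUSTMENT = 3.4  # percent, applied in 2010, 2011, and 2012

# (year, GAO low estimate %, GAO high estimate %, CMS reduction in $billions)
data = [
    (2010, 4.2, 4.2, 2.8),
    (2011, 4.6, 5.3, 3.0),
    (2012, 4.9, 6.4, 3.2),
]

low_total = high_total = 0.0
for year, low_pct, high_pct, cms_dollars in data:
    low = cms_dollars * (low_pct / CMS_ADJUSTMENT - 1)
    high = cms_dollars * (high_pct / CMS_ADJUSTMENT - 1)
    low_total += low
    high_total += high
    print(f"{year}: excess payments of ${low:.1f}B to ${high:.1f}B")
print(f"2010-2012 total: ${low_total:.1f}B to ${high_total:.1f}B")
# Output tracks the report's estimates ($0.6B; $1.1B-$1.6B; $1.5B-$2.9B;
# $3.2B-$5.1B cumulatively) to within about $0.1B.
```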
In response to our inquiries, CMS officials told us that they reviewed a range of options when determining an adjustment for 2013, but did not explain whether these options were derived from the agency's data-based analysis. CMS officials did not identify the specific source of their authority to consider factors other than the required data-based analysis when determining the adjustment amount, but stated that they believed that there was policy discretion with respect to the most appropriate adjustment factor for the payment year. While CMS did not change its risk score adjustment methodology for 2013, agency officials said they may revisit their methodology for future years. While diagnostic coding adjustments are subject to a statutory minimum beginning in 2014, CMS may implement a diagnostic coding adjustment that is greater than the statutory minimum if it determines the minimum is too low. Risk adjustment is important to ensure that payments to MA plans adequately account for differences in beneficiaries' health status and to maintain plans' financial incentive to enroll and care for beneficiaries regardless of their health status. Our work confirms that differences in diagnostic coding caused risk scores for MA beneficiaries to be higher than those for comparable beneficiaries in Medicare FFS in 2010, 2011, and 2012. CMS's decision to use a 3.4 percent adjustment to risk scores for 2010 through 2012 instead of the higher adjustments called for by our analysis resulted in excess payments to MA plans. The existence of such excess payments indicates that CMS's adjustment does not accurately account for differences in treatment and diagnostic coding between MA plans and Medicare FFS—the stated goal of the statute that required CMS to develop a diagnostic coding adjustment. In our January 2012 report, we recommended that CMS take steps to improve the accuracy of the adjustment to account for excess payments due to differences in diagnostic coding. We noted that CMS could, for example, account for additional beneficiary characteristics, include the most recent data available, identify and account for all the years of coding differences that could affect the payment year for which an adjustment is made, and incorporate the trend of the impact of coding differences on risk scores. CMS's adjustment for 2013 is the same as it used in 2010, 2011, and 2012. However, given our finding that this adjustment was too low and resulted in estimated excess payments to MA plans of at least $3.2 billion, we continue to believe that it is important for CMS to implement our recommendation that it update its methodology to more accurately account for differences in diagnostic coding. We provided a draft of this report to CMS for comment. CMS did not have any general comments. The agency provided one technical comment, which we incorporated. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix II. This appendix describes the scope and methodology that we used to address our objective to determine the extent to which differences, if any, in diagnostic coding between Medicare Advantage (MA) plans and Medicare fee-for-service (FFS) affected risk scores and payments to MA plans in 2010, 2011, and 2012. This methodology includes enhancements to the methodology we used in our January 2012 report. To determine the extent to which differences, if any, in diagnostic coding between MA plans and Medicare FFS affected MA risk scores in 2010, 2011, and 2012, we compared actual risk score growth for beneficiaries in our MA study population with the estimated risk score growth these beneficiaries would have had if they had been enrolled in Medicare FFS. For our estimates, we assumed MA plans and Medicare FFS had equivalent coding patterns in 2006—the year of coding upon which 2007 risk scores were based—and therefore calculated the cumulative effect of coding differences starting with risk score data from 2007. Because 2010 risk score data were the most current data available at the time of our analysis, we projected the extent of coding differences in 2011 and 2012 on the basis of trends from 2005 through 2010. To do these analyses, we used Centers for Medicare & Medicaid Services (CMS) enrollment and risk score data from 2004 through 2011. Risk scores for a given calendar year are generally based on beneficiaries' diagnoses in the previous year, so we identified our study population on the basis of enrollment data for 2004 through 2009 and analyzed risk scores for that population for 2005 through 2010. Our MA study population consisted of a retrospective cohort of 2010 MA beneficiaries who were enrolled in MA for all of 2009 and either MA or FFS in 2008, and we followed them back for the length of their continuous enrollment to 2004. Beneficiaries remained in our cohort for as many years as they were continuously enrolled in MA, plus (if applicable) the year of Medicare FFS enrollment immediately preceding their MA enrollment. We calculated disease scores using the 2010 version of the CMS-HCC risk adjustment community model (used for payment in 2010), and summed the appropriate coefficients for each of the HCC variables. We normalized disease scores for each year to 2005 by using the FFS normalization factor that CMS used to normalize risk scores in 2010. Normalization keeps the average Medicare FFS risk score constant at 1.0 over time and is necessary to compare disease scores across years. We used a regression model to estimate the change in disease scores that would have occurred if those MA beneficiaries had been continuously enrolled in FFS. We then compared these estimates of disease score growth under FFS to the MA beneficiaries' actual disease score growth and, because the regression model accounted for other relevant factors affecting disease score growth, we identified the difference as attributable to coding differences between MA and FFS. To calculate the cumulative impact of coding differences on 2010 MA risk scores, we summed the 2007 to 2008, 2008 to 2009, and 2009 to 2010 impact estimates for our retrospective MA cohort, weighting each impact estimate by the percentage of 2010 MA beneficiaries that could have experienced risk score growth due to coding differences during that period. To convert the cumulative impact of coding differences on 2010 MA risk scores into a percentage, we divided by the average MA risk score in 2010.
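Restated compactly (in our own notation, as a sketch of the description above):

```latex
% Cumulative impact of coding differences on 2010 MA risk scores,
% restated from the description above (notation is ours, not GAO's):
%   \Delta_p : estimated risk score growth attributable to coding
%              differences in period p, for p in {2007-08, 2008-09, 2009-10}
%   w_p      : share of 2010 MA beneficiaries who could have experienced
%              coding-difference growth during period p
%   \bar{r}_{2010} : average MA risk score in 2010
C_{2010} = \frac{\sum_{p} w_p \, \Delta_p}{\bar{r}_{2010}}
```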
To calculate the cumulative impact of coding differences on 2011 and 2012 MA risk scores, we used a similar methodology, except that we had to estimate the risk score growth due to coding differences for 2010 to 2011 and 2011 to 2012. To estimate the risk score growth due to coding differences for 2010 to 2011 and 2011 to 2012, we used the estimates based on our 2010 retrospective cohorts for five time periods—2005 to 2006, 2006 to 2007, 2007 to 2008, 2008 to 2009, and 2009 to 2010—and made two different projection assumptions. One projection, which we used to calculate our high cumulative impact estimate, assumed that the trend over 2005 to 2010 continued through 2012. We also provided a more conservative estimate that assumed the annual growth trend in coding differences from 2010 to 2012 stabilized at the same level as 2009-2010. We provided this lower estimate to account for the possibility that the growth rate will flatten as MA plans improve their systems for comprehensively coding diagnoses. Our updated methodology includes two major enhancements to the methodology used for our January 2012 report. First, we altered our methodology to make a more conservative assumption of the risk score growth due to coding differences for beneficiaries that recently enrolled in MA without any prior Medicare FFS enrollment. These beneficiaries are new to the Medicare program and tend to be healthier than other beneficiaries. Therefore, they may have less contact with their health plan physicians and less information in their medical records that plans could use to adjust risk scores. As a result, for MA beneficiaries that had a risk score based on MA coding for the second year of the period but did not have a risk score based on either MA or FFS coding for the first year of the period, we altered our methodology to assume that these beneficiaries had zero risk score growth due to coding differences during the period. For the January 2012 report, we had assumed that these beneficiaries experienced the same risk score growth due to coding differences, on average, as did those beneficiaries for which we could measure risk score growth due to coding differences. Second, we adjusted our cumulative impact estimates for 2011 and 2012 to account for the effect of beneficiaries who died after June 30, 2010, or otherwise left MA since 2010. We determined an adjustment was needed after observing that the impacts of coding differences for the time periods 2005-2006, 2006-2007, and 2007-2008 were smaller for the 2010 retrospective cohort used for this study than for the 2008 retrospective cohort used for our January 2012 report. We attributed these differences to the fact that beneficiaries who were in our 2008 cohort but not our 2010 cohort had, on average, greater differences between their actual and estimated disease scores than those beneficiaries who remained in MA. It is possible this is because sicker beneficiaries with greater coding differences relative to Medicare FFS left the study cohort as they aged, either because they died or because they rejoined Medicare FFS. We expect that beneficiaries who leave MA or die after 2010 would have a similar effect on our 2011 and 2012 projections. To address this, we adjusted our estimates of the impact of coding differences on the basis of the magnitude of differences in the impact of coding differences for our 2008 and 2010 retrospective cohorts during each period.
We applied these adjustments relative to the projection year (i.e., half of the differences between the 2008 and 2010 cohorts for periods 2005-2010 were applied to 2006-2011 for our 2011 projection and the full difference between the cohorts was applied to 2007-2012 for our 2012 projection). To quantify the impact of both our and CMS's estimates of coding differences on payments to MA plans in 2010, 2011, and 2012, we used data on MA plan bids—plans' proposed reimbursement rates for the average beneficiary—which are used to determine payments to MA plans, and information on MA plan enrollment. We analyzed monthly MA enrollment for 2010 and 2011, and because enrollment data for 2012 were incomplete, we used the annual trend for these 2 years in conjunction with actual enrollment for the first 9 months of 2012 to estimate total 2012 MA enrollment. We used these data to calculate total risk-adjusted payments for each MA plan before and after applying a coding adjustment, and then used the differences between these payment levels to estimate the percentage reduction in total projected payments to MA plans resulting from adjustments for coding differences in 2010, 2011, and 2012. Then we applied the percentage reduction in payments associated with each adjustment to the estimated total payments to MA plans—$114.8 billion in 2010, $122.9 billion in 2011, and $133.5 billion in 2012—and accounted for reduced Medicare Part B premium payments received by CMS, which offset the reduction in MA payments (see table 1). We analyzed data from CMS on Medicare beneficiaries, including data collected from Medicare providers and MA plans. We assessed the reliability of the CMS data we used by interviewing officials responsible for using these data to determine MA payments, reviewing relevant documentation, and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our study. In addition to the contact named above, Christine Brudevold, Assistant Director; Alison Binkowski; William A. Crafton; Gregory Giusto; Andrew Johnson; Richard Lipinski; Elizabeth Morrison; and Jennifer Whitworth made key contributions to this report.
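The structure of the payment calculation described in this appendix can be sketched in Python as follows. The payment rule and plan-level inputs are hypothetical placeholders (actual MA payments depend on how bids compare with benchmarks, which is why, per the figures above, a 3.4 percent risk score reduction corresponds to only about 2.4 percent of total payments), and treating the Part B premium offset as a simple subtraction is one plausible reading of the description:

```python
# Structural sketch of the payment-impact estimate described above.
# The payment rule and plan inputs are hypothetical placeholders; GAO's
# calculation used actual MA plan bids and enrollment data.
def total_payments(plans, payment_rule, adjustment=0.0):
    """plans: list of (bid, benchmark, enrollment_months, avg_risk_score).
    Applies the coding adjustment to each plan's average risk score and
    totals the resulting payments."""
    return sum(payment_rule(bid, benchmark, score * (1 - adjustment)) * months
               for bid, benchmark, months, score in plans)

def net_payment_reduction(plans, payment_rule, adjustment,
                          projected_total, part_b_offset):
    """Estimates the dollar reduction in payments from a coding adjustment."""
    before = total_payments(plans, payment_rule)
    after = total_payments(plans, payment_rule, adjustment)
    pct_reduction = (before - after) / before
    # Apply the percentage reduction to projected total MA payments, then
    # net out the reduced Part B premium payments received by CMS.
    return pct_reduction * projected_total - part_b_offset

# Projected total payments to MA plans, from the report (in $billions):
projected_totals = {2010: 114.8, 2011: 122.9, 2012: 133.5}
```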
CMS pays plans in MA—the private plan alternative to FFS—a predetermined amount per beneficiary adjusted for health status. To make this adjustment, CMS calculates a risk score, a relative measure of expected health care expenditures for each beneficiary. Risk scores should be the same among all beneficiaries with the same health conditions and demographic characteristics. Differences in diagnostic coding between MA plans and Medicare FFS led to inappropriately high MA risk scores and payments to MA plans, and CMS adjusted for coding differences in 2010. In January 2012, GAO reported that CMS's adjustments to risk scores did not sufficiently correct for coding differences, resulting in excess payments to MA plans. Since completing the analysis for the January 2012 report, risk score data for two additional years have become available. GAO (1) determined the extent to which differences, if any, in diagnostic coding between MA plans and Medicare FFS affected MA risk scores and payments to MA plans in 2010, 2011, and 2012; and (2) identified what changes, if any, CMS made to its risk score adjustment methodology for 2013 and intends to make for future years. To do this, GAO compared risk score growth for MA beneficiaries with an estimate of what risk score growth would have been for those beneficiaries if they were in Medicare FFS for 2010 and projected the growth to 2011 and 2012, and determined if there were changes to CMS's methodology by reviewing agency documentation and interviewing agency officials. GAO found that the cumulative impact of coding differences on risk scores increased from 2010 through 2012 and was greater than the Centers for Medicare & Medicaid Services' (CMS) risk score adjustment of 3.4 percent for each of the 3 years. In updating the analysis from its January 2012 report, GAO estimated that cumulative Medicare Advantage (MA) risk scores in 2010 were 4.2 percent higher than they likely would have been if the same beneficiaries had been enrolled continuously in Medicare fee-for-service (FFS). For 2011, GAO estimated that differences in diagnostic coding resulted in risk scores that were 4.6 to 5.3 percent higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS. This upward trend continued for 2012, with estimated risk scores 4.9 to 6.4 percent higher. CMS's adjustment to risk scores for 2010 through 2012 to account for diagnostic coding differences was too low, resulting in estimated excess payments to MA plans of at least $3.2 billion. CMS's annual 3.4 percent reduction in risk scores is equivalent to reducing total payments to MA plans by an estimated $2.8 billion in 2010, $3.0 billion in 2011, and $3.2 billion in 2012. According to GAO's estimates, the amount of the excess payments to MA plans after accounting for CMS's adjustments was $0.6 billion in 2010, between $1.1 billion and $1.6 billion in 2011, and between $1.5 billion and $2.9 billion in 2012. Cumulatively across the 3 years, this equals excess payments of between $3.2 billion and $5.1 billion. For 2013, CMS continues to use the risk score adjustment of 3.4 percent it used in 2010, 2011, and 2012. To conduct its data-based analysis, CMS officials reported that they used the same methodology used in 2010, but they incorporated more recent data. CMS officials told GAO that, in addition to the results of the data analysis, they incorporated additional factors such as recent payment changes made to the MA program under the Patient Protection and Affordable Care Act and the maintenance of benefits for seniors.
The Social Security Act does not prescribe CMS's methodology for adjusting for differences in diagnostic coding. However, the express purpose of the requirements to conduct and incorporate into the risk scores a data-based analysis of coding differences is to ensure payment accuracy. The act does not provide for factors other than the results of the analysis to be incorporated into the adjustment, suggesting that accuracy would be ensured solely through the incorporation of analytical results. CMS officials stated that they believed there was policy discretion with respect to the most appropriate adjustment factor but did not identify the specific source of their authority to consider factors other than the required data analysis when determining the adjustment amount. While CMS did not change its risk score adjustment methodology for 2013, agency officials said they may revisit their methodology for future years. GAO's findings underscore the need for CMS to implement the recommendation from GAO's January 2012 report that the agency improve the accuracy of its MA risk score adjustments by taking steps such as using the most current data available and incorporating adjustments for additional beneficiary characteristics. CMS reviewed a draft of this report and stated that it had no comments.
Under the existing, or "legacy," system, the military's disability evaluation process begins at a military treatment facility when a physician identifies a condition that may interfere with a servicemember's ability to perform his or her duties. On the basis of medical examinations and the servicemember's medical records, a medical evaluation board (MEB) identifies and documents any conditions that may limit a servicemember's ability to serve in the military. The servicemember's case is then evaluated by a physical evaluation board (PEB) to make a determination of fitness or unfitness for duty. If the servicemember is found to be unfit due to medical conditions incurred in the line of duty, the PEB assigns the servicemember a combined percentage rating for those unfit conditions, and the servicemember is discharged. Depending on the overall disability rating and number of years of active duty or equivalent service, the servicemember found unfit with compensable conditions is entitled to either monthly disability retirement benefits or lump sum disability severance pay. In addition to receiving disability benefits from DOD, veterans with service-connected disabilities may receive compensation from VA for lost earnings capacity. VA's disability compensation claims process starts when a veteran submits a claim listing the medical conditions that he or she believes are service-connected. In contrast to DOD's disability evaluation system, which evaluates only medical conditions affecting servicemembers' fitness for duty, VA evaluates all medical conditions claimed by the veteran, whether or not they were previously evaluated in DOD's disability evaluation process. For each claimed condition, VA must determine if there is credible evidence to support the veteran's contention of a service connection. Such evidence may include the veteran's military service records and treatment records from VA medical facilities and private medical service providers. Also, if necessary for reaching a decision on a claim, VA arranges for the veteran to receive a medical examination. Medical examiners are clinicians (including physicians, nurse practitioners, or physician assistants) certified to perform the exams under VA's Compensation and Pension program. Once a claim has all of the necessary evidence, a VA rating specialist determines whether the claimant is eligible for benefits. If so, the rating specialist assigns a percentage rating. If VA finds that a veteran has one or more service-connected disabilities with a combined rating of at least 10 percent, the agency will pay monthly compensation. In November 2007, DOD and VA began piloting the IDES. The IDES merges DOD and VA processes, so that servicemembers begin their VA disability claim while they undergo their DOD disability evaluation, rather than sequentially, making it possible for them to receive VA disability benefits shortly after leaving military service. Specifically, the IDES merges DOD and VA's separate exam processes into a single exam process conducted to VA standards. This single exam (which may involve more than one medical examination, for example, by different specialists), in conjunction with the servicemembers' medical records, is used by military service PEBs to make a determination of servicemembers' fitness for continued military service, and by VA as evidence of service-connected disabilities. The exam may be performed by medical staff working for VA, DOD, or a private provider contracted with either agency.
The IDES also consolidates DOD and VA's separate rating phases into one VA rating phase. If the PEB has determined that a servicemember is unfit for duty, VA rating specialists prepare two ratings—one for the conditions that DOD determined made a servicemember unfit for duty, which DOD uses to provide military disability benefits, and the other for all service-connected disabilities, which VA uses to determine VA disability benefits. In addition, the IDES provides VA case managers to perform outreach and nonclinical case management and to explain VA results and processes to servicemembers. In August 2010, DOD and VA officials issued an interim report to Congress summarizing the results of their evaluation of the IDES pilot as of early 2010 and indicating overall positive results. In that report, the agencies concluded that, as of February 2010, servicemembers who went through the IDES pilot were more satisfied than those who went through the legacy system, and that the IDES process met the agencies' goals of delivering VA benefits to active duty servicemembers within 295 days and to reserve component servicemembers within 305 days. Furthermore, they concluded that the IDES pilot achieved a faster processing time than the legacy system, which they estimated to be 540 days. Although DOD and VA's evaluation results indicated promise for the IDES, the extent to which they represented an improvement over the legacy system could not be known because of limitations in the legacy data. DOD and VA's estimate of 540 days for the legacy system was based on a small, nonrepresentative sample of cases. Officials planned to use a broader sample of legacy cases to compare against pilot cases with respect to processing times and appeal rates; however, inconsistencies in how the military services tracked information, as well as missing VA information (i.e., the date VA benefits were delivered) for legacy cases, precluded such comparisons. While our review of DOD and VA's data and reports generally confirmed DOD and VA's findings as of early 2010, we found that not all of the service branches were achieving the same results, that case processing times increased between February and August 2010, and that other agency goals were not being met. Since our December report, processing times have worsened further and the agencies have adjusted some goals downward. Servicemember satisfaction: Our reviews of the survey data as of early 2010 indicated that, on average, servicemembers in the IDES pilot had higher satisfaction levels than those who went through the legacy process. However, Air Force members—who represented a small proportion (7 percent) of pilot cases—were less satisfied. Currently, DOD and VA have an 80-percent IDES satisfaction goal, which has not been met. For example, 67 percent of servicemembers surveyed in March 2011 were satisfied with the IDES. Satisfaction by service ranged from 54 percent for the Marine Corps to 72 percent for the Army. Average case processing times: Although the agencies were generally meeting their 295-day and 305-day timeliness goals through February 2010, the average case processing time for active duty servicemembers increased from 274 to 296 days between February and August 2010. Among the military service branches, only the Army was meeting the agencies' timeliness goals as of August, while average processing times for each of the other services exceeded 330 days. Since August 2010, timeliness has worsened significantly.
For example, active component cases completed in March 2011 took an average of 394 days—99 days over the 295-day target. By service, averages ranged from 367 days for the Army to 455 days for the Marine Corps. Meanwhile, Reserve cases took an average of 383 days, 78 days more than the 305-day target, while Guard cases took an average of 354 days, 49 days more than the target. Goals to process 80 percent of cases in targeted time frames: DOD and VA had indicated in their planning documents that they had goals to deliver VA benefits to 80 percent of servicemembers within the 295-day (active component) and 305-day (reserve component) targets. For both active and reserve component cases at the time of our review, about 60 percent were meeting the targeted time frames. DOD and VA have since lowered their goals for cases completed on time, from 80 percent to 50 percent. Based on monthly data for 6 months through March 2011, the new, lower goal was not met during any month for active component cases. For completed Reserve cases, the lower goal was met during one of the 6 months, and for Guard cases, it was met in 2 months. The strongest performance was in October 2010, when 63 percent of Reserve cases were processed within the 305-day target. Based on our prior work, we found that—as DOD and VA tested the IDES at different facilities and added cases to the pilot—they encountered several challenges that led to delays in certain phases of the process. Staffing: Most significantly, most of the sites we visited reported experiencing staffing shortages and related delays to some extent, in part due to workloads exceeding the agencies' initial estimates. The IDES involves several different types of staff across several different DOD and VA offices, some of which have specific caseload ratios set by the agencies, and we learned about insufficient staff in many key positions. With regard to VA positions, officials cited shortages in examiners for the single exam, rating staff, and case managers. With regard to DOD positions, officials cited shortages of physicians who serve on the MEBs, PEB adjudicators, and DOD case managers. In addition to shortages cited at pilot sites, DOD data indicated that 19 of the 27 pilot sites did not meet DOD's caseload target of 30 cases per manager. Local DOD and VA officials attributed staffing shortages to higher than anticipated caseloads and difficulty finding qualified staff, particularly physicians, in rural areas. These staffing shortages contributed to delays in the IDES process. Two of the sites we visited—Fort Carson and Fort Stewart—were particularly challenged to provide staff in response to surges in caseload that occurred when Army units were preparing to deploy to combat zones. Through the Army's predeployment medical assessment process, large numbers of servicemembers were determined to be unable to deploy due to a medical condition and were referred to the IDES within a short period of time, overwhelming the staff. These two sites were unable to quickly increase staffing levels, particularly of examiners. As a result, at Fort Carson, it took 140 days on average to complete the single exam for active duty servicemembers, as of August 2010—much longer than the agencies' goal to complete the exams in 45 days. More recently, Fort Carson was still struggling to meet goals, as of March 2011.
For example, about half of Fort Carson's active component cases (558 of 1,033 cases) were in the MEB phase, and the average number of days spent in the MEB phase by active component cases completed in March 2011 was 197 days, compared to a goal of 35 days. Exam summaries: Issues related to the completeness and clarity of single exam summaries were an additional cause of delays in the VA rating phase of the IDES process. Officials from VA rating offices said that some exam summaries did not contain information necessary to determine a rating. As a result, VA rating office staff must ask the examiner to clarify these summaries and, in some cases, redo the exam. VA officials attributed the problems with exam summaries to several factors, including the complexity of IDES pilot cases, the volume of exams, and examiners not receiving records of servicemembers' medical history in time. The extent to which insufficient exam summaries caused delays in the IDES process is unknown because DOD and VA's case tracking system for the IDES does not track whether an exam summary has to be returned to the examiner or whether it has been resolved. Medical diagnoses: While the single exam in the IDES eliminates duplicative exams performed by DOD and VA in the legacy system, it raises the potential for disagreements about diagnoses of servicemembers' conditions. For example, officials at Army pilot sites informed us about cases in which a DOD physician had treated members for mental disorders, such as major depression. However, when the members went to see the VA examiners for their single exam, the examiners diagnosed them with posttraumatic stress disorder (PTSD). Officials told us that attempting to resolve such differences added time to the process and sometimes led to disagreements between DOD's PEBs and VA's rating offices about what the rating should be for purposes of determining DOD disability benefits. Although the Army developed guidance to help resolve diagnostic differences, the other services have not. Moreover, PEB officials we spoke with noted that there is no guidance on how disagreements about servicemembers' ratings between DOD and VA should be resolved beyond the PEBs informally requesting that the VA rating office reconsider the case. While DOD and VA officials cited several potential causes for diagnostic disagreements, the number of cases with disagreements about diagnoses and the extent to which they have increased processing time are unknown because the agencies' case tracking system does not track when a case has had such disagreements. Logistical challenges integrating VA staff at military treatment facilities: DOD and VA officials at some pilot sites we visited said that they experienced logistical challenges integrating VA staff at the military facilities. At a few sites, it took time for VA staff to receive common access cards needed to access the military facilities and to use the facilities' computer systems, and for VA physicians to be credentialed. DOD and VA staff also noted several difficulties using the agencies' multiple information technology (IT) systems to process cases, including redundant data entry and a lack of integration between systems.
Housing and other challenges posed by extended time in the military disability evaluation process: Although many DOD and VA officials we interviewed at central offices and pilot sites felt that the IDES process expedited the delivery of VA benefits to servicemembers, several also indicated that it may increase the amount of time servicemembers are in the military’s disability evaluation process. Therefore, some DOD officials noted that servicemembers must be cared for, managed, and housed for a longer period. The military services may move some servicemembers to temporary medical units or to special medical units such as Warrior Transition Units in the Army or Wounded Warrior Regiments in the Marine Corps, but at a few pilot sites we visited, these units were either full or members in the IDES did not meet their admission criteria. In addition, officials at two sites said that members who are not gainfully employed by their units and left idle are more likely to be discharged due to misconduct and forfeit their disability benefits. However, DOD officials also noted that servicemembers benefit from continuing to receive their salaries and benefits while their case undergoes scrutiny by two agencies, though some also acknowledged that these additional salaries and benefits create costs for DOD. DOD and VA are deploying the IDES to military facilities worldwide on an ambitious timetable—expecting deployment to be completed at a total of about 140 sites by the end of fiscal year 2011. As of March 2011, the IDES was operating at 73 sites, covering about 66 percent of all military disability evaluation cases. In preparing for IDES expansion militarywide, DOD and VA had many efforts under way to address challenges experienced at the 27 pilot sites. For example, the agencies completed a significant revision of their site assessment matrix—a checklist used by local DOD and VA officials to ascertain their readiness to begin the pilot—to address areas where prior IDES sites had experienced challenges. In addition, local senior-level DOD and VA officials will be expected to sign the site assessment matrix to certify that a site is ready for IDES implementation. This differs from the pilot phase where, according to DOD and VA officials, some sites implemented the IDES without having been fully prepared. Through the new site assessment matrix and other initiatives, DOD and VA planned to address several of the challenges identified in the pilot phase. Ensuring sufficient staff: With regard to VA staff, VA planned to increase the number of examiners by awarding a new contract through which sites can acquire additional examiners. To increase rating staff, VA filled vacant rating specialist positions and anticipates hiring a small number of additional staff. With regard to DOD staff, Air Force and Navy officials told us they added adjudicators for their PEBs or planned to do so. Both DOD and VA indicated they plan to increase their numbers of case managers. Meanwhile, sites are being asked in the assessment matrix to provide longer and more detailed histories of their caseloads, as opposed to the 1-year history that DOD and VA had based their staffing decisions on during the pilot phase. The matrix also asks sites to anticipate any surges in caseloads and to provide a written contingency plan for dealing with them. 
Ensuring the sufficiency of single exams: VA has been revising its exam templates to better ensure that examiners include the information needed for a VA disability rating decision and to enable them to complete their exam reports in less time. VA is also examining whether it can add capabilities to the IDES case tracking system that would enable staff to identify where problems with exams have occurred and track the progress of their resolution. Ensuring adequate logistics at IDES sites: The site assessment matrix asks sites whether they have the logistical arrangements needed to implement the IDES. In terms of information technology, DOD and VA were developing a general memorandum of agreement intended to enable DOD and VA staff access to each other's IT systems. DOD officials also said that they are developing two new IT solutions—one intended to help military treatment facilities better manage their cases, the other intended to reduce redundant data entry. However, in some areas, DOD and VA's efforts to prepare for IDES expansion did not fully address some challenges or are not yet complete. For these areas, we recommended additional actions that the agencies could take, with which the agencies generally concurred. Ensuring sufficient DOD MEB physician staffing: DOD does not yet have strategies or plans to address potential shortages of physicians to serve on MEBs. For example, the site assessment matrix does not include a question about the sufficiency of military providers to handle expected numbers of MEB cases at the site, or ask sites to identify strategies for ensuring sufficient MEB physicians if there is a caseload surge or staff turnover. We recommended that, prior to implementing the IDES at MTFs, DOD direct the military services to conduct thorough assessments of the adequacy of military physician staffing for completing MEB determinations and develop contingency plans to address potential shortfalls, for example, due to staff turnover or caseload surges. Ensuring sufficient housing and organizational oversight for IDES participants: Although the site assessment matrix asks sites whether they will have sufficient temporary housing available for servicemembers going through the IDES, the matrix requires only a yes or no response and does not ensure that sites will have conducted a thorough review of their housing capacity. In addition, the site assessment matrix does not address plans for ensuring that IDES participants are gainfully employed or sufficiently supported by their organizational units. We recommended that, prior to implementing the IDES at MTFs, DOD ensure that thorough assessments are conducted on the availability of housing for servicemembers and on the capacity of organizational units to absorb servicemembers undergoing the disability evaluation; that alternative housing options are identified when sites lack adequate capacity; and that plans are in place for ensuring that servicemembers are appropriately and constructively engaged. Addressing differences in diagnoses: According to agency officials, DOD is currently developing guidance on how staff should address differences in diagnoses. However, since the new guidance and procedures are still being developed, we cannot determine whether they will aid in resolving discrepancies or disagreements. Significantly, DOD and VA do not have a mechanism for tracking when and where disagreements about diagnoses and ratings occur and, consequently, may not be able to determine whether the guidance sufficiently addresses the discrepancies.
Therefore, we recommended that DOD and VA conduct a study to assess the prevalence and causes of such disagreements and establish a mechanism to continuously monitor diagnostic disagreements. VA has since indicated it plans to conduct such a study and make a determination by July 2011 regarding what, if any, mechanisms are needed. Further, despite regular reporting of data on caseloads, processing times, and servicemember satisfaction, and preparation of an annual report on challenges in the IDES, we determined that DOD and VA did not have a systemwide monitoring mechanism to help ensure that the steps they took to address challenges were sufficient and to identify problems on a more timely basis. For example, they did not collect data centrally on staffing levels at each site relative to caseload. As a result, DOD and VA may be delayed in taking corrective action, since it takes time to assess what types of staff are needed at a site and to hire or reassign staff. DOD and VA also lacked mechanisms or forums for systematically sharing information on challenges, as well as best practices, between and among sites. For example, DOD and VA have not established a process for local sites to systematically report challenges to DOD and VA management and for lessons learned to be systematically shared systemwide. During the pilot phase, VA surveyed pilot sites on a monthly basis about challenges they faced in completing single exams. Such a practice has the potential to provide useful feedback if extended to other IDES challenges. To identify challenges as they arise in all DOD and VA facilities and offices involved in the IDES, and thereby enable early remedial action, we recommended that DOD and VA develop a systemwide monitoring mechanism. This system could include continuous collection and analysis of data on DOD and VA staffing levels, sufficiency of exam summaries, and diagnostic disagreements; monitoring of available data on caseloads and case processing time by individual VA rating office and PEB; and a formal mechanism for agency officials at local DOD and VA facilities to communicate challenges and best practices to DOD and VA headquarters. VA noted several steps it plans to take to improve its monitoring of IDES workloads, site performance, and challenges—some targeted to be implemented by July 2011—which we have not reviewed. By merging two duplicative disability evaluation systems, the IDES has shown promise for expediting the delivery of VA benefits to servicemembers leaving the military due to a disability. However, we identified significant challenges at pilot sites that require careful management attention and oversight. We noted a number of steps that DOD and VA were undertaking or planned to undertake that may mitigate these challenges. However, the agencies' deployment schedule is ambitious in light of substantial management challenges and additional evidence of deteriorating case processing times. As such, it is unclear whether these steps will be sufficiently timely or effective to support militarywide deployment. Deployment time frame notwithstanding, we continue to believe that the success or failure of the IDES will depend on DOD and VA's ability to quickly and effectively address resource needs and resolve challenges as they arise, not only at the initiation of the transition to IDES, but also on an ongoing, long-term basis. We continue to believe that DOD and VA cannot achieve this without a robust mechanism for routinely monitoring staffing and other risk factors.
Chairman Chaffetz and Ranking Member Tierney, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the individual named above, key contributors to this testimony include Michele Grgich, Greg Whitney, and Daniel Concepcion. Key advisors included Bonnie Anderson, Mark Bird, Sheila McCoy, Patricia Owens, Roger Thomas, Walter Vance, and Randall Williamson. Military and Veterans Disability System: Pilot Has Achieved Some Goals, but Further Planning and Monitoring Needed. GAO-11-69. Washington, D.C.: December 6, 2010. Military and Veterans Disability System: Preliminary Observations on Evaluation and Planned Expansion of DOD/VA Pilot. GAO-11-191T. Washington, D.C.: November 18, 2010. Veterans' Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010. Recovering Servicemembers: DOD and VA Have Jointly Developed the Majority of Required Policies but Challenges Remain. GAO-09-728. Washington, D.C.: July 8, 2009. Recovering Servicemembers: DOD and VA Have Made Progress to Jointly Develop Required Policies but Additional Challenges Remain. GAO-09-540T. Washington, D.C.: April 29, 2009. Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T. Washington, D.C.: February 27, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers. GAO-07-1256T. Washington, D.C.: September 26, 2007. Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the efforts by the Departments of Defense (DOD) and Veterans Affairs (VA) to integrate their disability evaluation systems. Wounded warriors unable to continue their military service must navigate DOD's and VA's disability evaluation systems to be assessed for eligibility for disability compensation from the two agencies. GAO and others have found problems with these systems, including long delays, duplication in DOD and VA processes, confusion among servicemembers, and distrust of systems regarded as adversarial by servicemembers and veterans. To address these problems, DOD and VA have designed an integrated disability evaluation system (IDES), with the goal of expediting the delivery of VA benefits to servicemembers. After pilot testing the IDES at an increasing number of military treatment facilities (MTF)—from 3 to 27 sites—DOD and VA are in the process of deploying it worldwide. As of March 2011, the IDES has been deployed at 73 MTFs—representing about 66 percent of all military disability evaluation cases—and worldwide deployment is scheduled for completion in September 2011. This testimony summarizes and updates our December 2010 report on the IDES and addresses the following points: (1) the results of DOD and VA's evaluation of their pilot of the IDES, including updated data as of March 2011 from IDES monthly reports, where possible; (2) challenges in implementing the piloted system to date; and (3) DOD and VA's plans to expand the piloted system and whether those plans adequately address potential challenges. In summary, DOD and VA concluded that, based on their evaluation of the pilot as of February 2010, the pilot had (1) improved servicemember satisfaction relative to the existing "legacy" system and (2) met their established goal of delivering VA benefits to active duty and reserve component servicemembers within 295 and 305 days, respectively, on average. However, 1 year after this evaluation, average case processing times have increased significantly, such that active component servicemembers' cases completed in March 2011 took an average of 394 days to complete—99 days more than the 295-day goal. In our prior work, we identified several implementation challenges that had already contributed to delays in the process. The most significant challenge was insufficient staffing by DOD and VA. Staffing shortages and process delays were particularly severe at two pilot sites we visited where the agencies did not anticipate caseload surges. The single exam posed other challenges that contributed to delays, such as disagreements between DOD and VA medical staff about diagnoses for servicemembers' medical conditions that often required further attention, adding time to the process. Pilot sites also experienced logistical challenges, such as incorporating VA staff at military facilities and housing and managing personnel going through the process. DOD and VA were taking or planning to take steps to address a number of these challenges. For example, to address staffing shortages, VA is developing a contract for additional medical examiners, and DOD and VA are requiring local staff to develop written contingency plans for handling caseload surges. Given increased processing times, the efficacy of these efforts at this time is unclear.
We recommended additional steps the agencies could take to address known challenges--such as establishing a comprehensive monitoring plan for identifying problems as they occur in order to take remedial actions as early as possible--with which DOD and VA generally concurred.
The Welfare Reform Act gives the states the opportunity and authority to streamline their operations by allowing them to experiment with ways to standardize their eligibility and benefit requirements for households participating in both TANF and the Food Stamp Program. Specifically, the act provided the states with the option of operating the Simplified Food Stamp Program. This program allows them to establish eligibility and benefit levels on the basis of household size and income, work requirements, and other criteria established under TANF, food stamps, or a combination of both programs—as long as federal costs are not increased in doing so. The TANF block grant, administered by the U.S. Department of Health and Human Services, helped the states provide assistance to about 3.9 million needy families with children in fiscal year 1997. Among other things, the grant is intended to allow children to be cared for in their own homes and to reduce welfare dependency by promoting job preparation and work. The fixed amounts of the states' TANF grants under the Welfare Reform Act are based on the amount of the grants they received in specified fiscal years under prior law, adjusted for population increases under certain circumstances. For fiscal year 1997, the federal grants available to the states totaled $16.7 billion and ranged from $21.8 million in Wyoming to over $3.7 billion in California. With respect to state funding, the Welfare Reform Act included a "maintenance-of-effort" provision requiring the states to provide 75 to 80 percent of their historic level of funding. Because of congressional concern that welfare had become a way of life for some participants, a key purpose of the new law is to promote work over welfare and self-reliance over dependency. In support of this goal, the law provides that the states must require able-bodied participants to engage in work or work-related activities and must impose a 5-year lifetime limit on federal assistance. The states must require adults in families receiving TANF assistance to participate in work or work-related activities after receiving assistance for 24 months, or sooner, as defined by the state. If recipients fail to participate as required, the states must at least reduce the families' assistance and may opt to terminate it entirely. To avoid financial penalties, the states must ensure that a certain specified minimum percentage of their beneficiaries are participating in work or work-related activities each year. These percentages are referred to as "minimum mandated participation rates." To count toward these mandated rates, adults in families receiving welfare must participate for a certain minimum number of hours in work or a work-related activity as prescribed in the Welfare Reform Act—such as job readiness workshops; on-the-job training; and, under certain circumstances, education. The required number of hours of participation and the percentage of a state's caseload that must participate to meet mandated rates increase over time, as shown in table 1. Instead of prescribing in detail how the states should structure their TANF programs, the Welfare Reform Act authorizes the states to use their block grants in any manner that is reasonably calculated to accomplish the purposes of TANF. For example, the states are allowed to set forth their own criteria for defining who will be eligible and what assistance and services will be available, provided they ensure fair and equitable treatment.
The states may opt to deny assistance altogether for noncitizens, drug felons, minor teen parents, or those they determine are able to work. The states may also choose when to require adults to participate in work activities, what types of activities are allowed, who should receive a participation waiver, and whether to terminate grants to entire families for noncompliance. The Food Stamp Program is the cornerstone of federal food assistance programs. It is an entitlement program that helped put food on the table for about 8.3 million households each day during 1997 at a federal cost of about $19.5 billion. It provides low-income households with paper coupons or electronic benefits that can be redeemed for food in about 200,000 authorized stores across the nation. U.S. citizens and some legal immigrants who are admitted for permanent residency may qualify for food stamps. Household eligibility and benefit amounts are based on nationwide federal criteria, including household size and income; eligibility is also based on assets, housing costs, work requirements, and other factors. The program operates in the 50 states, the District of Columbia, Guam, and the U.S. Virgin Islands. The U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) administers the program in cooperation with state agencies. FNS is responsible for approving state plans for operation and ensuring that the states are administering the program in accordance with regulations. The program is administered at the local level by either a state agency or a local welfare agency, depending on the state. Local office staff are responsible for determining eligibility and the level of benefits issued to participating households. The federal government pays the full cost of benefits and shares administrative costs with the states. The federal cost to administer the program was about $1.7 billion in fiscal year 1997, and the average monthly household food stamp benefit was $169. The Welfare Reform Act requires most able-bodied adult participants receiving food stamps to work in return for their benefits. Able-bodied participants ages 18 to 50 with no dependent children may receive food stamp benefits for only 3 months in every 36-month period unless they are engaged in work or work programs. Qualifying work includes participating in a work program for a monthly average of 20 hours or more a week. Exemptions from the work requirement vary by program. For example, single parents with children under the age of 6 may be exempted from the food stamp work requirement, while TANF allows the states to exempt single parents with children under the age of 1. The Simplified Food Stamp Program allows the states to determine eligibility and benefits under one set of program criteria for families receiving assistance from both the Food Stamp Program and TANF. Under the simplified program, the states can merge their TANF and food stamp rules into a single set of eligibility and benefit requirements and control the extent to which the program’s rules are merged. Some states, for example, may elect to implement a “full” simplified program in which an extensive set of TANF requirements—such as income and asset limits—is adopted to determine eligibility. Other states may adopt a more limited, or “mini,” simplified program—incorporating only TANF’s work requirement into their simplified program. 
With a mini program, the states can administratively combine the value of their TANF and food stamp benefits in calculating whether participants are being provided with benefit amounts that are at least equal to the minimum wage—currently $5.15 an hour—multiplied by the number of hours they are required to work. (App. III describes in more detail how the adoption of a simplified program helps the states to comply with the requirement to provide the minimum wage.) The Welfare Reform Act requires the states to retain several features of the regular Food Stamp Program, such as issuance procedures and the use of the Thrifty Food Plan as the basis of benefits in their simplified program. The states must also obtain FNS’ approval to implement a simplified program. To establish a program, the states must demonstrate that their simplified program will not cost the federal government more than the Food Stamp Program would have cost for the affected participants in any fiscal year—that is, the program has to be cost neutral. The simplified program’s cost-neutrality requirement is more restrictive than the cost-neutrality requirement approved for some welfare reform demonstration projects, which have allowed states to calculate cost neutrality over several years. In response to our July 1998 survey, seven states—Arizona, Delaware, Georgia, Idaho, Mississippi, New Jersey, and New York—reported that they had implemented a simplified program. Each of these states implemented a mini simplified program. Six states—Arkansas, Florida, Illinois, Maine, North Carolina, and South Carolina—indicated that they were planning to implement the simplified program. Since July, one of these states—Arkansas—has implemented it—bringing the total number of states with simplified programs to eight and reducing the number of states planning to implement the program to five. Appendix IV presents the status of each state’s implementation as of July 1998. Unlike the seven states that had implemented a mini simplified program, Arkansas adopted a more extensive simplified program, using a number of TANF requirements to determine eligibility and benefits. For example, under the regular food stamp rules, the portion of the fair market value of a household vehicle that exceeds $4,650 is counted toward the household’s resource limit. However, Arkansas’ simplified program uses a state-defined TANF rule that exempts the total value of the vehicle from the resource limit. Arkansas’ primary goal in adopting the simplified program is to create a “one-stop” process for approving TANF and food stamp benefits, thereby eliminating separate participant interviews, forms, and reporting requirements for each program. The state expects that caseworkers will then be able to spend more time helping program participants find employment. (App. V describes the Arkansas program in more detail.) According to the survey responses, of the five remaining states (excluding Arkansas) planning to implement a simplified program, all but Florida plan to adopt it within the next year. Florida was uncertain about when it would adopt a simplified program. Figure 1 identifies the eight states (including Arkansas) that have implemented a simplified program and the five states that are planning to implement a program. Thirty states are not planning to implement the simplified program, and 9 states—California, Colorado, Connecticut, Massachusetts, New Hampshire, Pennsylvania, Rhode Island, Utah, and the U.S. 
Virgin Islands—reported that they are uncertain about whether they will implement the simplified program. Thirty-five of the 45 states that had not implemented a simplified program indicated that, as currently structured, the simplified program was hardly or not at all helpful in achieving a more efficient and streamlined operation. Nine states indicated that the program was somewhat or moderately helpful, and one state did not indicate whether it was helpful or not. As figure 2 shows, the states that have not implemented a simplified program cited a number of concerns that have discouraged its adoption. The most frequently cited concern, noted by 34 of the states that had not implemented a simplified program, was increased caseworker burden because of an additional set of program criteria. The additional criteria would be produced by the states’ merging of TANF and food stamp provisions under their simplified program. For example, under the simplified program, a state may replace the regular food stamp provision exempting single parents with children under age 6 from work with a TANF provision exempting single parents with children under age 1. Such changes can produce a distinct set of simplified program work requirements not found in either TANF or the regular Food Stamp Program. The next most frequently cited concern, noted by 28 of the states, was that the program’s cost-neutrality provision restricted the states’ options for simplifying the program. That is, a simplified program that incorporates eligibility or benefit determination criteria that result in higher food stamp benefit costs must be offset by other program features that reduce benefit costs to what they would have been under the regular Food Stamp Program—in effect, requiring the states to make program design trade-offs to achieve cost neutrality. Finally, the third most frequently cited factor discouraging program implementation, noted by 24 of the states, was that other Welfare Reform Act requirements had a higher priority. For example, the Welfare Reform Act required the states to submit their TANF plans for federal approval and begin implementing the program by July 1, 1997. In contrast, the states were under no deadline or requirement to develop and implement a simplified program. Some of the other frequently cited concerns that discouraged states from implementing the program included the states’ difficulties in aligning food stamp requirements with TANF, the absence of state automation to support program implementation, and the potential increase in the states’ food stamp error rates, which would result in financial penalties. (App. VI discusses our review of North Dakota, which had obtained FNS’ approval to implement a full simplified program and subsequently decided against it for many of the same concerns cited by other states.) Thirty-four of the states that had not implemented a simplified program suggested ways to improve the program to make its adoption more attractive to the states. Suggestions included extending the simplified program to all food stamp households, eliminating or relaxing the cost-neutrality requirement, and providing a moratorium on the financial penalties associated with increased food stamp error rates. The states’ suggestions are summarized in appendix VII. Most states did not expect the simplified program to affect the level of participation in the Food Stamp Program or the benefits provided. 
As figure 3 shows, according to 6 of the 7 states that had implemented a mini simplified program and 32 of the states that had not implemented a simplified program, the program would have little or no impact on the number of households receiving food stamps. Similarly, as figure 4 shows, the 7 states that had implemented a mini simplified program and 24 of the states that had not implemented a simplified program reported that the program would have little or no impact on the benefit amounts received by participants. According to Georgia’s food stamp director, the simplified program has had minimal impact on participation or benefit levels because most of the state’s TANF households are already receiving food stamps. Food stamp officials in Idaho told us that the minimal impact on food stamp participation and benefit levels in their state is due to the relatively small number of households that participate in the simplified program compared with the state’s total food stamp population. The states also assessed other impacts that would occur as a result of the implementation of a simplified program. For example, 35 states indicated that there would be little or no change in their TANF or Medicaid costs resulting from implementation of the simplified program, whereas 24 states reported that their food stamp administrative costs would increase. (See fig. 5.) Although the Welfare Reform Act was, in part, intended to simplify the administration of welfare assistance, this goal is not being achieved with the simplified program. Most states have not implemented the simplified program, and, under current legislation, few states are likely to take full advantage of this option. While the legislation seems to offer the states a great degree of flexibility in designing a simplified program, its restrictive cost-neutrality provision, its potential to increase caseworkers’ burden, and the difficulty of aligning strict food stamp rules with different state-developed TANF regulations leave the states with little incentive to adopt the program. If the simplified program’s authorizing legislation remains unchanged, the program will continue to be of little value to the states—with the exception of the flexibility it gives them to combine the value of their TANF and food stamp benefits in order to meet the minimum wage requirements for workfare participants. To fulfill the Simplified Food Stamp Program’s potential, the Congress may wish to consider working with USDA to develop modifications to the legislation that established the program in order to address the implementation concerns discussed in this report. Among other things, the Congress could consider providing the states with more flexibility in meeting the simplified program’s cost-neutrality requirement by, for example, allowing a longer period of time to measure cost neutrality, rather than annually as the current legislation requires. The Congress could also examine the feasibility of granting the states a grace period in which increased error rates attributed to the implementation of the simplified program are exempted from financial penalties. We provided USDA with a copy of a draft of this report for review and comment. We met with officials from the Food and Nutrition Service, including the Director of Grants Management and the Chief of Food Stamp Program Design, who generally agreed with the facts presented in this report. The Food and Nutrition Service had two overall comments. 
First, the barriers to implementing the program cited by the states as being of most concern, such as its cost-neutrality provision, are imposed by the program’s authorizing legislation, not by the Food and Nutrition Service or the Food Stamp Program. Second, it will be difficult to identify legislative changes that balance the states’ desire for more flexibility with the Congress’s concern for cost containment and the Food Stamp Program’s mission of ensuring nutritional security for low-income families. We agree that the barriers of most concern to the states originate in the program’s authorizing legislation and that legislative actions effectively satisfying both state and congressional concerns may be difficult to develop. Nonetheless, we continue to believe that the actions we identified as matters for congressional consideration, such as granting the states a grace period from financial penalties due to increased error rates, offer the promise of making the program more useful to the states without necessarily increasing federal costs. In addition to its two overall comments, the Food and Nutrition Service suggested technical clarifications to the report, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other interested congressional committees; and the Secretary of Agriculture. We will also make copies available to others upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VIII. In October 1996, the Ranking Minority Member, Subcommittee on Children and Families, Senate Committee on Labor and Human Resources (now known as the Committee on Health, Education, Labor and Pensions), asked us to study several issues concerning the impact of welfare reform on the Food Stamp Program. This report addresses the status and impact of the states’ implementation of the Simplified Food Stamp Program. Specifically, we (1) identify the number of states that have adopted or are planning to adopt the Simplified Food Stamp Program, (2) describe the concerns that may be preventing other states from adopting the simplified program, and (3) examine the impacts that the adoption of the simplified program may have on households’ eligibility and benefits. To address the first objective, we mailed a national survey in July 1998 to 53 state agencies that administer the Food Stamp Program. To encourage their responses to our survey, we used follow-up mailings and telephone calls. We received survey responses from 52 of the 53 states, a response rate of 98 percent. (Iowa declined to participate in our survey.) To address the second and third objectives, we collected pertinent studies, reports, and program literature on the welfare reform changes contained in the 1996 Welfare Reform Act. Our survey also collected information on the actual or anticipated changes in the number of participating food stamp households and officials’ views on whether benefit levels were expected to increase or decrease as a result of the simplified program. We also collected information on the problems or concerns that the states perceive may result from adopting the simplified program. 
In addition, we interviewed officials at the headquarters of the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) and at all seven of its regional offices to collect information on the effect of the simplified program on household participation and benefit levels. We interviewed food stamp officials in the eight states that have adopted the simplified program: Arizona, Arkansas, Delaware, Georgia, Idaho, Mississippi, New Jersey, and New York. We also interviewed food stamp officials in North Dakota—the only state that has decided against implementing the simplified program after receiving program approval from FNS—to obtain information on the state’s assessment process for the adoption of the simplified program. We performed our work from April through December 1998 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the state food stamp directors’ responses to our questionnaire. The Department of Labor has determined that under the Fair Labor Standards Act of 1938, as amended, welfare participants who are required to work must receive benefits that are at least equal in value to the minimum wage multiplied by the number of hours worked. Some states that provide low levels of assistance through Temporary Assistance for Needy Families (TANF), such as Mississippi, have found it difficult to require participants to work the hours mandated by the Welfare Reform Act to receive benefits and remain in compliance with the federal minimum wage restriction. For example, a Mississippi household consisting of a single parent and two children received a TANF benefit of $120 in October 1997. The maximum hours that the state could require the parent to work under the minimum wage restriction would have been a little more than 5 hours a week ($120, divided by the average of 4.3 weeks per month, divided by the $5.15 minimum wage, equals about 5.4 hours)—about 15 hours short of TANF’s 20-hour weekly work requirement in October 1997. According to the Mississippi food stamp director, the state addressed this shortfall in work hours by adopting a “mini” simplified program that allowed it to administratively combine the value of its monthly TANF and food stamp benefits when calculating the maximum number of hours that participating households can be required to work under the minimum wage restriction. In the above example, combining the $120 monthly TANF benefit with the maximum food stamp benefit—at that time, $321 for a family of three—brought the total value of the household’s October 1997 assistance to $441. With the combined benefit amount, the maximum number of hours allowed under the minimum wage restriction increased from about 5 to 20 hours a week. As a result, Mississippi was able to meet TANF’s fiscal year 1998 work requirement for participants without exceeding the maximum number of work hours allowed. (A brief sketch of this arithmetic appears below.) Arkansas was the first state to obtain FNS’ approval to implement a simplified program that merges several Food Stamp Program and TANF requirements. Arkansas’ goal in adopting a simplified program is to streamline and simplify the process of applying for food stamp and TANF benefits. By implementing a “seamless” procedure for approving the two benefits at the same time—a “one-stop” process—caseworkers should save time on client interviews, forms, and reporting requirements. Caseworkers would use the additional available time to help welfare recipients pursue employment opportunities. 
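The arithmetic behind the Mississippi example above can be sketched briefly. The following Python fragment is purely illustrative; the function name and constants are ours, not part of any state or FNS system, and the dollar figures are those cited in this report.

MINIMUM_WAGE = 5.15      # dollars per hour in October 1997
WEEKS_PER_MONTH = 4.3    # average number of weeks per month used in the report

def max_weekly_hours(monthly_benefits):
    # Dividing benefits by the minimum wage gives compensable hours per month;
    # dividing again by average weeks per month gives weekly hours.
    return monthly_benefits / WEEKS_PER_MONTH / MINIMUM_WAGE

print(round(max_weekly_hours(120.00), 1))           # TANF benefit alone: about 5.4 hours
print(round(max_weekly_hours(120.00 + 321.00), 1))  # combined TANF and food stamps: about 19.9 hours

Under these illustrative figures, combining the two benefits raises the permissible workweek from about 5 hours to roughly the 20 hours that TANF required in fiscal year 1998.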
Reducing the time clients spend in the application process would also help decrease the need to hire additional caseworkers to meet the increased responsibilities resulting from implementing welfare reform changes. Arkansas serves about 100,000 households, or 1 percent of the nation’s food stamp household population. This appendix describes the state’s experiences in adopting a simplified program. In 1996, Arkansas began developing proposals for a simplified program with the idea that it would implement the program at the same time as TANF. By March 1997, Arkansas had drafted a plan and contacted FNS for guidance. FNS provided contract assistance to help ensure that the simplified plan was cost neutral, and subsequent analyses showed that Arkansas stayed well within the cost-neutrality requirement. However, with regard to the process of aligning the regulations for the Food Stamp Program and TANF to achieve a simplified program plan, Arkansas’ food stamp officials told us that the lack of FNS regulations and detailed guidance resulted in a learn-as-you-go, ad hoc approval process that was full of uncertainties. As a result, Arkansas officials were not sure which proposed program changes FNS would approve. According to FNS officials, Arkansas submitted its proposal for a Simplified Food Stamp Program for review and approval in October 1997. FNS officials stated that some of Arkansas’ proposed simplified plan provisions could not be approved because of the strict statutory requirements of the Food Stamp Program or the adverse impact of the proposed program changes on recipients’ benefits. FNS officials said they took great care in reviewing and approving the Arkansas plan because Arkansas’ simplified program and the modifications suggested by FNS officials would be instrumental in shaping federal policies for the Simplified Food Stamp Program nationwide. FNS approved Arkansas’ plan for pure-TANF households (all household members are eligible to receive TANF assistance) in February 1998. According to agency officials, FNS approved Arkansas’ simplified program plan for mixed-TANF households (some, but not all, household members are eligible to receive TANF assistance) in March 1998, after Arkansas modified the plan to reduce the amount of benefit loss for these households; Arkansas implemented the program in August 1998. Arkansas officials cited three major changes to the simplified program that FNS either did not approve or only partially approved because the strict statutory requirements for the Food Stamp Program were difficult to align with TANF. These proposed rules and procedures would have greatly contributed to Arkansas’ goal of creating a “seamless” process to reduce application processing time. These changes were to (1) use an average dollar amount to determine the shelter costs under a simplified program, (2) change Arkansas’ application process so that the approval of both TANF and food stamp benefits could be given at the same time, and (3) change the definition of households to exclude children born while the mother was receiving TANF assistance. To determine whether a household’s net monthly income qualifies for the Food Stamp Program, a shelter expense deduction is included in the calculation. Under program regulations, a household is entitled to a deduction based on its shelter costs (such as rent, mortgage payments, utility bills, property taxes, and insurance). 
Specifically, under the regular Food Stamp Program, a household without elderly or disabled members receives a deduction for the portion of shelter expenses exceeding 50 percent of net income, not to exceed the capped amount. Households containing elderly or disabled members are entitled to subtract the full value of their shelter costs when those costs exceed 50 percent of their adjusted income. In fiscal year 1996, the limit on the excess shelter expense deduction for a household without elderly or disabled members was generally $247. Arkansas’ proposed simplified program would have changed how shelter costs are considered in determining food stamp benefits. Specifically, Arkansas would have applied a county’s average shelter costs to both pure- and mixed-TANF households. Arkansas wanted to use the average shelter cost because, in keeping with the goal of creating a “seamless” process, each county could then be assigned a standard shelter cost. However, using each county’s average shelter costs in calculating simplified program benefits would have reduced the food stamp benefits, particularly for the mixed-TANF households, especially those containing elderly or disabled members, according to an FNS contractor’s analysis. While benefit amounts for pure-TANF households would increase an average of less than 1 percent, benefits for mixed-TANF households would decrease an average of more than 8 percent. Furthermore, approximately one-half of all the mixed-TANF households participating in the simplified program would lose benefits, and about one-third of the households would lose more than 20 percent. FNS said that such a level of reduction in benefits compromised the program’s nutritional support for a significant number of mixed-TANF households, most of which contain elderly or disabled members, who are particularly vulnerable to nutritional loss. FNS would not approve any policy changes that would have a negative effect on food stamp benefits. Consequently, FNS instructed Arkansas to modify its simplified program and consider changes in the calculation of shelter expenses on the basis of standardized and actual expenses for each county. Arkansas modified its simplified program to allow mixed-TANF household recipients to choose between the standard shelter costs and the household’s actual shelter costs. However, Arkansas also kept the average benefit calculation. FNS commented that the simplified program’s authorizing legislation requires FNS to approve implementation plans for pure-TANF households so long as these plans comply with the law. Using the simplified program authority, FNS established a single criterion for approving mixed-TANF households, and that criterion does not prohibit, but only limits, the amount that benefits can be reduced for these households. According to FNS, as long as a state’s plan for mixed-TANF households does not reduce benefits beyond specific thresholds and meets the statutory requirements, FNS will approve any state’s plan. FNS officials stated that they offered their services to work with Arkansas officials in developing several appropriate alternative ways to consider shelter costs in determining benefits in the simplified program. The use of actual expenses was only one of several suggestions made by FNS to limit the amount of benefit loss. 
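The excess shelter deduction rules just described can be summarized in a minimal sketch. This Python fragment is illustrative only, assuming the fiscal year 1996 cap of $247 cited above; the function and parameter names are hypothetical, and the elderly-or-disabled branch is simplified in that the report applies the 50 percent test to adjusted rather than net income for those households.

SHELTER_CAP = 247  # fiscal year 1996 cap for households without elderly or disabled members

def excess_shelter_deduction(shelter_costs, income, elderly_or_disabled):
    # Only the portion of shelter costs above 50 percent of income is deductible.
    excess = max(0, shelter_costs - 0.5 * income)
    if elderly_or_disabled:
        return excess              # full excess shelter costs, with no cap
    return min(excess, SHELTER_CAP)

print(excess_shelter_deduction(500, 600, elderly_or_disabled=False))  # 200.0
print(excess_shelter_deduction(800, 600, elderly_or_disabled=True))   # 500.0 (uncapped)
print(excess_shelter_deduction(800, 600, elderly_or_disabled=False))  # 247 (capped)

Arkansas’ proposal would, in effect, have replaced the household-specific shelter_costs input with a county average, which is what drove the benefit losses that FNS’ contractor estimated for mixed-TANF households.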
Arkansas had proposed approving the applications for TANF and food stamp benefits at the same time. In effect, any household submitting an application to participate in TANF would be automatically submitting an application to participate in food stamps. FNS officials did not approve this program change because, under the Welfare Reform Act, the recipient has to be eligible for TANF assistance before being eligible for benefits under the simplified program. Consequently, Arkansas officials changed the simplified program rules to certify applicants’ eligibility for food stamp benefits after the TANF application is approved. If an applicant is not approved for TANF assistance within 30 days, the applicant is approved to receive benefits under the regular Food Stamp Program rules, if otherwise eligible. The TANF application form was revised to include information needed to determine eligibility for the regular Food Stamp Program. Once the applicant completed all the necessary paperwork and was subsequently approved for TANF assistance, the Arkansas caseworker would then convert the applicant’s regular food stamp eligibility to the simplified program. This application procedure requires caseworkers to take extra time to process an application for food stamp benefits under the regular Food Stamp Program rules. Moreover, moving applicants back and forth between the regular Food Stamp Program and the simplified program can cause confusion and add to caseworkers’ workload. FNS commented that it approved the Arkansas plan for joint application filing, and the only restrictions on joint application processing pertain to situations in which the state is not able to approve the TANF application within 30 days. If TANF cannot be approved within 30 days, the food stamp application must be processed using regular Food Stamp Program procedures. This procedure ensures that food stamp benefits are not delayed while the state is making its determination regarding TANF. Arkansas had proposed changing the food stamp definition of households to exclude children born while the mother was receiving TANF assistance. FNS advised Arkansas that the definition of a household under the food stamp regulations could not be altered. Children ineligible for TANF assistance because of the family cap provision would still be included as household members for simplified program benefits. FNS advised Arkansas that it must use the regular food stamp regulations governing household composition in determining whether a household is eligible to participate in the simplified program. In addition, the households affected by the family cap provision must have their food stamp benefits determined under the regular Food Stamp Program. Arkansas revised its simplified program to allow households with children ineligible for TANF assistance solely because of the family cap provision to participate in the simplified program. The change in the definition of a household can cause that household to move between the simplified program and the regular Food Stamp Program. Once a household becomes ineligible for TANF assistance, the household’s eligibility for food stamps under the regular Food Stamp Program must be determined. FNS understood the administrative difficulties in switching households between the simplified program and the regular Food Stamp Program when TANF benefits are suspended for a short period of time. FNS notified Arkansas that it would be appropriate to allow a household to retain its simplified program status for a period of time—but no longer than 4 months. 
Households that are suspended from TANF for longer than 4 months must have their benefits redetermined under the regular Food Stamp Program. FNS commented that the simplified program plan Arkansas submitted did not request an alteration in household composition because of children subject to the family cap. It stated that these households would participate in the simplified program and would have their food stamp benefit amounts adjusted. Furthermore, FNS officials stated that because the Arkansas plan includes households with children subject to the family cap, benefits for these households are determined using the simplified program procedures. In spite of some of the difficulties, Arkansas still implemented a version of a simplified program. Arkansas’ simplified program adopted TANF’s processing standards and rules to the extent possible, including the following: No medical costs are allowed under the simplified program because TANF recipients are covered by Medicaid. Under the regular Food Stamp Program, deductions for medical costs over $35 are available to households that contain elderly or disabled members. No dependent care costs are allowed under the simplified program because the state pays child care costs for TANF recipients. Under the regular Food Stamp Program, households with dependents receive a deduction of up to $200 for expenses involved in caring for children and other dependents while household members work, seek employment, or attend school. Countable income and resources will be determined by TANF rules, which limit resources to $3,000 for households. The regular Food Stamp Program permits up to $2,000 in countable assets for most households. Countable assets include cash; assets that can easily be converted to cash, such as money in checking or savings accounts, savings certificates, stocks or bonds; and lump-sum payments and nonliquid resources. Furthermore, TANF allows resource exemptions, such as the total value of one motor vehicle, while the Food Stamp Program exempts a vehicle’s fair market value only up to $4,650. Households will be certified as eligible for assistance under the simplified program for up to 12 months, and benefits can be automatically extended. The regular Food Stamp Program certification period is also up to 12 months in Arkansas, but at the end of the period applicants must reapply for benefits, according to Arkansas officials. This certification period varies across the states and averages about 10 months. Arkansas invested a small amount of resources to develop its simplified program. The program serves approximately 10 percent of the state’s food stamp households. Arkansas officials stated that, although it is too early to determine whether the simplified program will meet its goals, some of the program’s potential benefits may include the following: Eligible TANF households will not have to apply for food stamps. Households receiving food stamps through the simplified program will have no change in reporting requirements, other than the TANF program requirements. County caseworkers will not be required to process a separate set of income and eligibility verification system reports generated through the Food Stamp Program. Households participating in the simplified program will not be subject to quarterly reporting. The state’s administrative costs will be cut because workers will no longer have to process two applications and report changes and will therefore be free to assist households in obtaining employment. 
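The contrast between the regular and simplified resource tests described above can also be sketched briefly. The following Python fragment is illustrative only; the function names and household values are hypothetical, and the rules shown are limited to the cash and vehicle provisions cited in this report.

REGULAR_LIMIT = 2000      # regular Food Stamp Program countable-asset limit
SIMPLIFIED_LIMIT = 3000   # TANF-based limit used in Arkansas' simplified program
VEHICLE_EXEMPTION = 4650  # fair market value exempted under regular rules

def countable_resources_regular(cash_assets, vehicle_value):
    # Regular rules: the portion of a vehicle's fair market value above
    # $4,650 counts toward the resource limit.
    return cash_assets + max(0, vehicle_value - VEHICLE_EXEMPTION)

def countable_resources_simplified(cash_assets, vehicle_value):
    # Arkansas' simplified program applies a TANF rule exempting the total
    # value of one motor vehicle, so only cash assets count here.
    return cash_assets

# A hypothetical household with $1,500 in cash and a $6,000 vehicle:
print(countable_resources_regular(1500, 6000) <= REGULAR_LIMIT)        # False: $2,850 exceeds $2,000
print(countable_resources_simplified(1500, 6000) <= SIMPLIFIED_LIMIT)  # True: $1,500 is within $3,000

Under these illustrative figures, the same household fails the regular resource test but passes the simplified one; differences of this kind are what a state must offset elsewhere to keep its program cost neutral.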
Arkansas officials believe that they should be able to make their informal assessment of the simplified program in January 1999. After investing significant resources to simplify program administration, North Dakota officials, believing that the obstacles to implementation were insurmountable, decided not to implement the Simplified Food Stamp Program. North Dakota serves only about 16,000 households, or about two-tenths of a percent of the nation’s food stamp households. This case study describes North Dakota’s experience in attempting to implement a simplified program and become more administratively efficient in providing public assistance to its clients. In 1993, North Dakota officials, in anticipation of welfare reform, developed a conceptual plan for program simplification with the ultimate goal of getting people off welfare. North Dakota planned to develop a single cash benefit program based on family size that would replace assistance programs such as food stamps, Aid to Families with Dependent Children (AFDC), and Low Income Energy Assistance. Under this cash benefit program, eligibility determinations would be made with one set of rules for TANF, food stamps, and energy assistance, and benefits would be provided as a lump sum “cash out payment.” North Dakota also planned to develop a comprehensive system to meet essential training, education, and employment needs of persons receiving public assistance. The conceptual welfare reform plan resulted in a comprehensive program and an automated management system—referred to as Training, Education, Employment, and Management (TEEM)—which determines individuals’ eligibility for public assistance and the level of benefits they should receive. During 1996, the state planned to initiate a TEEM demonstration project. However, the Food Stamp Program was suspended from the demonstration project because FNS did not approve the policy changes requested by the state as a part of its Food Stamp Program and TANF merger. During this period, the Welfare Reform Act, containing the Simplified Food Stamp Program, was enacted. State officials believed that the simplified program provisions would allow their TEEM effort to obtain greater uniformity between the food stamp and TANF programs. Although the act disallowed a cash benefit for food stamps, North Dakota officials believed that the simplified program’s provisions would afford greater opportunities for implementing major policy changes, including the opportunity to provide a “single lump sum benefit.” In addition, the proposed TEEM demonstration project would have been limited to operating in only 10 North Dakota counties, whereas the simplified program would operate statewide. Ultimately, however, North Dakota decided not to implement a simplified program and is proceeding in its efforts to establish TEEM without the Food Stamp Program. North Dakota officials told us that they encountered a number of obstacles when working toward approval of a simplified program. Overall, they said the lack of federal regulations contributed to uncertainty regarding the type of program changes that could be achieved under the simplified program. The development of the program was a learn-as-you-go process, and although numerous concerns were resolved, new ones surfaced. State officials said that these roadblocks were a continuing source of frustration and created an environment of uncertainty as to what policy changes could be achieved under the simplified program legislation. 
Some of the major obstacles North Dakota faced are discussed below. According to North Dakota officials, caseworkers’ burden would increase as a result of implementing the simplified program. In addition to TANF and the regular Food Stamp Program, the simplified program creates an additional set of regulations and procedures—in essence creating a whole new program. This new program would comprise a mixture of TANF and Food Stamp Program regulations. Generally, caseworkers determine welfare benefits by applying multiple program regulations and procedures, including those for food stamps, TANF, and Low Income Energy Assistance. According to state officials, with all the changes occurring in welfare reform in general, and TANF and TEEM in particular, the addition of a new set of program regulations and procedures would increase the complexity of the process to determine eligibility for welfare assistance. North Dakota caseworkers are located in small rural counties and, depending on the location, have different degrees of experience and responsibility. Some caseworkers manage all aspects of their county offices, some also serve as secretaries, and staff range from newly hired and seasonal employees to long-term employees. Because many caseworkers do not administer these programs full-time and do not routinely determine eligibility for welfare assistance, North Dakota officials believe that these caseworkers are not as familiar with current welfare regulations as are full-time caseworkers. Thus, to add an additional set of program regulations and procedures would increase caseworkers’ burden. North Dakota, like other states, is required by FNS to conduct quality control reviews of its food stamp cases to identify and measure any erroneous food stamp issuances. Using the results of the review, the state determines an error rate that is the percentage of benefits either issued to ineligible households or issued in improper amounts (under- or over-payments) to eligible households. This error rate is reported to FNS. North Dakota officials stated that the addition of the simplified program would definitely increase the state’s food stamp error rates because of the additional burden it would place on the caseworkers. North Dakota’s error rate was 11.03 percent in fiscal year 1997, which exceeded the national average of 9.88 percent. This higher error rate resulted in a sanction of $38,978. North Dakota officials informed us that many of the mistakes were caused by seasonal workers. Other errors resulted from households’ fluctuating earned income, changes in welfare reform processes, and other local circumstances. Considering the state’s high error rate, and the fact that the simplified program would add an additional set of program regulations and procedures, North Dakota officials believed that implementation of the simplified program would contribute to caseworkers’ burden and could result in an even higher error rate. If the error rate increased by as much as 1 percentage point over the fiscal year 1997 rate, North Dakota’s sanction liability would have increased by $97,260, to about 3.5 times the fiscal year 1997 sanction. According to North Dakota officials, the Simplified Food Stamp Program as outlined in the Welfare Reform Act is not simple. While the act’s provisions establishing the simplified program appear to allow the states the flexibility to make almost any type of program change, the Food Stamp Program is restrictive and some of its statutory provisions cannot be changed. 
In this regard, state officials told us that the Food Stamp Program differs from TANF and the Low Income Energy Assistance Program, which are block grants and therefore provide the state with flexibility in changing program operations. For example, according to North Dakota officials, state and FNS officials could not agree on the time period beneficiaries would have for notifying program officials of significant changes in earned income. That time period varies between TANF and the Food Stamp Program—for TANF, it is 10 days, and for food stamps, it is up to 2 months. According to state officials, they could not agree with FNS officials because the time period allowed under the Food Stamp Program regulations could not be changed. FNS commented that, for the simplified plan, it approved the TANF requirement for reporting changes. According to FNS officials, since the Food Stamp Act places no restrictions on the states with respect to changes in reporting requirements, the states may use their TANF rules, food stamp rules, or a combination of the two. Two major reforms that North Dakota officials sought to initiate under the simplified program were not approved by FNS. FNS officials stated that these changes would alter the fundamental concepts of the Food Stamp Program. The two major policy reforms were (1) redefining household composition and (2) using a single benefit calculation for the TANF, food stamp, and Low Income Energy Assistance programs. According to FNS officials, approval of the changes for household composition and benefit calculations would alter the Food Stamp Program’s most fundamental features by eliminating the national nutrition safety net for low-income households. In addition, the approval of these two policy changes would reduce food stamp benefits for the elderly and disabled recipients who share living quarters with a TANF recipient. FNS officials stated that North Dakota’s proposal would replace the Food Stamp Program with essentially a state program, which the Congress elected not to do under welfare reform. When FNS did not approve this change, North Dakota abandoned its simplified food stamp proposal because the simplified program would not achieve the TEEM goal of providing a single lump sum benefit. FNS officials commented that North Dakota was aware even before submitting its simplified program proposal that FNS would not approve the major points of its plan, because these proposals had been previously denied in a demonstration project proposal for North Dakota’s TEEM effort. According to FNS officials, FNS advised North Dakota that the two proposals could not be approved under the simplified program because they violated the Food Stamp Act. However, North Dakota continued to seek these program changes as a part of its simplified program proposal. Although North Dakota’s simplified program proposal was cost neutral initially, state officials told us they feared that the program would not be cost neutral in future years. They were concerned about federal or state policy changes that could take place over time and that could have a negative impact on cost neutrality. Under such conditions, the state might no longer meet the cost-neutrality requirement. Under the Welfare Reform Act, if FNS determines that a state’s program has increased federal costs for any year (or portion of a year), it must notify the state within 30 days. 
Within 90 days, the state must then submit, for FNS’ approval, a corrective action plan designed to prevent its simplified program from increasing federal food stamp costs. If the state does not submit or carry out a plan, its simplified program will be terminated, and according to the act, the state will be ineligible to operate a simplified program in the future. In the opinion of North Dakota officials, if the cost-neutrality provision were extended beyond 1 year, there would be a greater opportunity to achieve the goal. In April 1998, shortly before receiving FNS’ final approval, North Dakota officials decided to abandon the implementation of the simplified program. According to state officials, it became obvious that the program—revised from its original proposal in order to obtain FNS approval—would not meet the needs of North Dakota’s TEEM effort. FNS officials stated that these changes were necessary to meet the Food Stamp Act’s requirements. State officials believed that the state’s 5-year welfare reform effort, including the approximately 1-year effort to develop a simplified program, had wasted hundreds of hours of staff time. Since North Dakota did not achieve its goal, the expenditure of valuable personnel resources that could have been put to more productive use represents a great loss to the state, according to state officials. (Appendix VII summarizes the states’ suggested changes to program rules, such as removing restrictions, allowing the states to use the same financial penalties across programs, and basing benefits on income only.) Major contributors to this report were Patricia A. Gleason, Assistant Director; Peter M. Bramble, Jr., Evaluator-in-Charge; Jacqueline A. Cook, Senior Evaluator; Melissa M. Francis, Evaluator; Carolyn M. Boyce, Senior Social Science Analyst; and Carol Herrnstadt Shulman, Senior Communications Analyst.
Pursuant to a congressional request, GAO provided information on the impact of welfare reform on the Food Stamp Program, focusing on: (1) the number of states that have adopted or are planning to adopt the Simplified Food Stamp Program; (2) the concerns that may be preventing other states from adopting the simplified program; and (3) the impacts that the adoption of the simplified programs may have on households' eligibility and benefits. GAO noted that: (1) GAO's July 1998 survey indicated that seven states had implemented a limited, or mini, Simplified Food Stamp Program; (2) of the 45 states that had not implemented the simplified program, 6 were planning to do so; 30 indicated that they did not plan to do so; and the 9 remaining states were uncertain about their plans; (3) one of the six states--Arkansas--was planning to adopt the simplified program, but subsequently implemented a full, or more comprehensive, program that establishes a uniform set of eligibility requirements for both food stamp and Temporary Assistance for Needy Families (TANF) assistance; (4) the states that had not implemented the simplified program cited several concerns that discouraged them: (a) increasing caseworkers' burden by creating a third set of eligibility criteria for a simplified program that are different from those associated with the separately administered TANF and the Food Stamp Program; (b) restricting the options for designing a simplified program by requiring it to be cost neutral; and (c) other Welfare Reform Act requirements that had a higher priority; (5) the simplified program would have little or no impact on either the number of households participating in the Food Stamp Program or on the amount of their benefits, according to the majority of states that have and have not implemented a simplified program; (6) the simplified program has limited impact, according to one state, primarily because a relatively small number of households participate in it compared with the state's total food stamp population; and (7) according to another state, since most TANF households also receive assistance under the regular Food Stamp Program, there is little change in total benefit costs as a result of the state's adoption of the simplified program.
According to the fiscal year 2013 President’s Budget, DOD accounts for about 57 percent of the discretionary federal budget authority. (See figure 1.) For fiscal year 2011, of the 24 agencies covered by the Chief Financial Officers Act of 1990 (CFO Act), DOD was the only agency to receive a disclaimer of opinion on all of its financial statements. The DOD Inspector General (IG) reported that the department’s fiscal year 2011 financial statements would not substantially conform to generally accepted accounting principles; DOD’s financial management and feeder systems were unable to adequately support material amounts on the financial statements; and long-standing material internal control weaknesses identified in prior audits continued to exist, including material weaknesses in areas such as financial management systems, Fund Balance with Treasury, Accounts Receivable, and General Property, Plant, and Equipment. In 2005, the DOD Comptroller first prepared the Financial Improvement and Audit Readiness (FIAR) Plan for improving the department’s business processes. The FIAR Plan is DOD’s strategic plan and management tool for guiding, monitoring, and reporting on the department’s financial management improvement efforts. As such, the plan communicates progress in addressing the department’s financial management weaknesses and achieving financial statement auditability. In accordance with the NDAA for Fiscal Year 2010, DOD provides reports to relevant congressional committees on the status of DOD’s implementation of the FIAR Plan twice a year—no later than May 15 and November 15. The NDAA for Fiscal Year 2010 also mandated that the FIAR Plan include the specific actions to be taken to correct the financial management deficiencies that impair the department’s ability to prepare timely, reliable, and complete financial management information (Pub. L. No. 111-84, §1003(a)(2)). In May 2010, the DOD Comptroller issued the FIAR Guidance to implement the FIAR Plan. The FIAR Guidance provides a standardized methodology for DOD components to follow for achieving financial management improvements and auditability. The FIAR Guidance requires DOD components to identify and prioritize their business processes into assessable units, and then prepare a Financial Improvement Plan (FIP) for each assessable unit in accordance with the FIAR Guidance. Many of the procedures required by the FIAR Guidance are consistent with selected procedures for conducting a financial audit, such as testing internal controls and information system controls. In September 2010, we reported that the department needed to focus on implementing its FIAR Plan and that the key to successful implementation would be the efforts of the DOD military components and the quality of their individual FIPs. DOD components, such as the Army, Navy, and Air Force, are expected to develop and implement FIPs in accordance with the FIAR Guidance. The steps required for these plans include assessing processes, controls, and systems; identifying and correcting weaknesses; assessing, validating, and sustaining corrective actions; and ultimately achieving audit readiness. After a component’s management determines that an assessable unit is ready for audit, both the DOD Comptroller and the DOD IG review the related FIP documentation to determine if they agree with management’s conclusion of audit readiness. 
DOD intends to progress toward achieving financial statement auditability by executing the FIAR Guidance methodology for groups of assessable units across four waves. Under the FIAR Plan, successful execution of the FIAR Guidance methodology for groups of assessable units across these waves is intended to result in the audit readiness of various components’ financial statements through fiscal year 2017. The first two waves of the FIAR Plan focus on achieving the DOD Comptroller’s interim budgetary priorities, which DOD believes should lead to an auditable SBR. The third wave focuses on accountability for DOD’s mission-critical assets, and the fourth wave focuses on the remaining assessable units constituting DOD’s complete set of financial statements. As mentioned earlier, the Secretary of Defense directed the department to achieve audit readiness for the SBR for General Fund activities by the end of fiscal year 2014. The NDAA for Fiscal Year 2012 reinforced this directive by requiring that the next FIAR Plan Status Report—to be issued in May 2012—include a plan, with interim objectives and milestones for each military department and the defense agencies, to support the goal of SBR audit readiness by 2014. The NDAA for Fiscal Year 2012 also requires the plan to include process and control improvements and business systems modernization efforts necessary for the department to consistently prepare timely, reliable, and complete financial management information. The SBR is the only financial statement predominantly derived from an entity’s budgetary accounts in accordance with budgetary accounting rules, which are incorporated into generally accepted accounting principles (GAAP) for the federal government. The SBR is designed to provide information on authorized budgeted spending authority reported in the Budget of the United States Government (President’s Budget), including budgetary resources, availability of budgetary resources, and how obligated resources have been used. In November 1990, DOD created the Defense Finance and Accounting Service (DFAS) as its accounting agency to consolidate, standardize, and integrate finance and accounting requirements, functions, procedures, operations, and systems. The military services continue to perform certain finance and accounting activities at each military installation. These activities vary by military service depending on what the services retained and the number of personnel they transferred to DFAS. As DOD’s accounting agency, DFAS is critical to DOD auditability as it records transactions in the accounting records, prepares thousands of reports used by managers throughout DOD and by the Congress, and prepares DOD-wide and service-specific financial statements. The military services play a vital role in that they authorize most of DOD’s expenditures and are the source of most of the financial information that DFAS uses to make payroll and contractor payments. The military services also have responsibility for most of DOD’s assets and the related information needed by DFAS to prepare annual financial statements required under the CFO Act. To support its operations, DOD performs an assortment of interrelated and interdependent business functions, such as logistics, procurement, health care, and financial management. 
As we have previously reported, the DOD systems environment that supports these business functions has been overly complex, decentralized, and error prone, characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks and storing the same data, and (3) the need for data to be entered manually into multiple systems. For fiscal year 2012, the department requested about $17.3 billion to operate, maintain, and modernize its business systems. DOD has reported that it relies on 2,258 business systems, including 335 financial management systems, 709 human resource management systems, 645 logistics systems, 243 real property and installation systems, and 281 weapon acquisition management systems.

For decades, DOD has been challenged in modernizing its timeworn business systems. Since 1995, GAO has designated DOD's business systems modernization program as high risk. In June 2011, we reported that the modernization program had spent hundreds of millions of dollars on an enterprise architecture and investment management structures that had limited value. As our research on public and private sector organizations has shown, two essential ingredients of a successful systems modernization program are an effective institutional approach to managing information technology (IT) investments and a well-defined enterprise architecture. For its business systems modernization, DOD is developing and using a federated business enterprise architecture, which is a coherent family of parent and subsidiary architectures, to help modernize its nonintegrated and duplicative business operations and the systems that support them.

Section 1072 of the NDAA for Fiscal Year 2010 requires that programs submitted for approval under DOD's business system investment approach be assessed to determine whether or not appropriate business process reengineering efforts have been undertaken. The act further states that these efforts should ensure that the business process to be supported by the defense business system modernization will be as streamlined and efficient as practicable and that the need to tailor commercial off-the-shelf systems to meet unique requirements or incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable.

GAO's recent work highlights the types of challenges facing DOD as it strives to attain audit readiness and reengineer its business processes and systems. DOD leadership has committed the department to the goal of auditable financial statements and has developed FIAR Guidance to provide specific instructions for DOD components to follow for achieving auditability incrementally. The department and its components also established interim milestones for achieving audit readiness for various parts (or assessable units) of the financial statements. These efforts are an important step forward. The urgency in addressing these challenges has been increased by the recent efforts to accelerate audit readiness time frames, in particular attaining audit readiness for the department's SBR by fiscal year 2014. Our September 2011 report highlights the types of challenges DOD may continue to face as it strives to attain audit readiness, including instances in which DOD components prematurely asserted audit readiness and missed interim milestones. Also, DOD's efforts over the past couple of years to achieve audit readiness for some significant SBR assessable units have not been successful.
However, these experiences can serve to provide lessons for DOD and its components to consider in addressing the department's auditability challenges. DOD's ability to achieve departmentwide audit readiness is highly dependent on its military components' ability to effectively develop and implement FIPs in compliance with DOD's FIAR Guidance. However, in our September 2011 report, we identified several instances in which the components did not prepare FIPs that fully complied with the FIAR Guidance, resulting in premature assertions of audit readiness.

Specifically, as we reported in September 2011, the FIAR Guidance provides a reasonable methodology for the DOD components to follow in developing and implementing their FIPs. It details the roles and responsibilities of the DOD components, and prescribes a standard, systematic approach that components should follow to assess processes, controls, and systems, and identify and correct weaknesses in order to achieve auditability. When DOD components determine that sufficient financial improvement efforts have been completed for an assessable unit in accordance with the FIAR Guidance and that the assessable unit is ready for audit, the FIP documentation is used to support the conclusion of audit readiness. Thus, complying with the FIAR Guidance can provide a consistent, systematic means for DOD components to achieve and verify audit readiness incrementally.

We found that when DOD components did not prepare FIPs that fully complied with the FIAR Guidance, they made assertions of audit readiness prematurely and did not achieve interim milestones. While the components initially appeared to meet some milestones by asserting audit readiness in a timely manner, reviews of supporting documentation for the FIPs of two assessable units and attempts to audit the Marine Corps' SBR revealed that the milestones had not been met because the assessable units were not actually ready for audit. For example, the Navy asserted audit readiness for its civilian pay in March 2010 and the Air Force asserted audit readiness for its military equipment in December 2010. However, we reported that neither component had adequately developed and implemented its FIPs for these assessable units in accordance with the FIAR Guidance, and the assessable units were therefore not ready for audit. The Marine Corps first asserted financial audit readiness for its General Fund SBR on September 15, 2008. The DOD IG reviewed the Marine Corps' assertion package and on April 10, 2009, reported that the assertion of audit readiness was not accurate and that the documentation supporting the assertion was not complete. GAO has made prior recommendations to address these issues. DOD has generally agreed with these recommendations and is taking corrective actions in response.

The Secretary of Defense's direction to achieve audit readiness for the SBR by the end of 2014 necessitated that DOD's components revise some of their plans and put more focus on short-term efforts to develop accurate data for the SBR in order to achieve this new accelerated goal. In August 2011, DOD's military components achieved one milestone toward SBR auditability when they all received validation by an independent public accounting firm that their Appropriations Receipt and Distribution—a section of the SBR—was ready for audit. In addition, the November 2011 FIAR Plan Status Report indicated that the Air Force achieved audit readiness for its Fund Balance with Treasury (FBWT).
Further, in a February 2012 briefing on its accelerated plans, DOD indicated that 7 of 24 material general fund Defense Agencies and Other Defense Organizations are either already sustaining SBR audits or are ready to have their SBRs audited. These accomplishments represent important positive steps. Nevertheless, achieving audit readiness for the military components' SBRs is likely to pose significant challenges based on the long-standing financial management weaknesses and audit issues affecting key SBR assessable units. Our recent reports highlight some of the difficulties that the components have experienced recently related to achieving an auditable SBR, including the Army's inability to locate and provide supporting documentation for its military pay; the Navy's and the Marine Corps' inability to reconcile their Fund Balance with Treasury accounts; and the Marine Corps' inability to provide sufficient documentation to auditors of its SBR. To achieve SBR audit readiness by 2014, DOD and its components need accelerated, yet feasible, well-developed plans for identifying and correcting weaknesses in the myriad processes involved in producing the data needed for the SBR. While DOD has developed an accelerated FIAR Plan to provide an overall view of the department's approach for meeting the 2014 goal, most of the work must be carried out at the component level.

The Army's active duty military payroll, reported at $46.1 billion for fiscal year 2010, made up about 20 percent of its reported net outlays for that year. As such, it is significant to both Army and DOD efforts for achieving auditability for the SBR. For years, we and others have reported continuing deficiencies in the Army's military payroll processes and controls. Moreover, other military components such as the Air Force and the Navy share some of these same military payroll deficiencies. In March 2012, we reported that the Army could not readily identify a complete population of its payroll accounts for fiscal year 2010. DOD's FIAR Guidance states that identifying the population of transactions is a key task essential to achieving audit readiness. However, the Army and DFAS-Indianapolis (DFAS-IN), which is responsible for accounting, disbursing, and reporting for the Army's military personnel costs, did not have an effective, repeatable process for identifying the population of active duty payroll records. For example, it took 3 months and repeated attempts before DFAS-IN could provide a population of service members who received active duty Army military pay in fiscal year 2010. Further, because the Army does not have an integrated military personnel and payroll system, it was necessary to compare the payroll file to active Army personnel records. However, DOD's central repository for information on DOD-affiliated personnel did not have an effective process for comparing military pay account files with military personnel files to identify a valid population of military payroll transactions. In addition, the Army and DFAS-IN were unable to provide documentation to support the validity and accuracy of a sample of fiscal year 2010 payroll transactions we selected for review. For example, DFAS-IN had difficulty retrieving and providing usable Leave and Earnings Statement files, and the Army was unable to locate or provide supporting personnel documents for a statistical sample of fiscal year 2010 Army military pay accounts.
At the end of September 2011, 6 months after we had provided them with our sample of 250 items, the Army and DFAS-IN were able to provide complete documentation for only 2 of the sample items and partial documentation for another 3 items; they were unable to provide any documentation for the remaining 245 sample items. As of the time of our report, the Army had several military pay audit readiness efforts planned or under way. Timely and effective implementation of these efforts could help reduce the risk of DOD not achieving the SBR audit readiness goal of 2014. However, most of these actions are in the early planning stages. Moreover, these initiatives, while important, do not address (1) establishing effective processes and systems for identifying a valid population of military payroll records; (2) ensuring that Leave and Earnings Statement files and supporting personnel documents are readily available for verifying the accuracy of payroll records; (3) ensuring that key personnel and other pay-related documents that support military payroll transactions are centrally located, retained in service members' Official Military Personnel Files, or otherwise readily accessible; and (4) requiring the Army's Human Resources Command to periodically review and confirm that service members' Official Military Personnel File records are consistent and complete enough to support annual financial audit requirements. GAO has made prior recommendations to address these issues. DOD has agreed with these recommendations and is taking corrective actions in response.

A successful audit of the SBR is dependent on the ability to reconcile an agency's Fund Balance with Treasury (FBWT) with the Treasury records. FBWT is an account that reflects an agency's available budget spending authority by tracking its collections and disbursements. Reconciling a FBWT account with Treasury records is a process similar in concept to reconciling a checkbook with a bank statement. In December 2011, we reported that neither the Navy nor the Marine Corps had implemented effective processes for reconciling their FBWT. The Navy and the Marine Corps rely on the DFAS location in Cleveland (DFAS-CL) to perform their FBWT reconciliations. We found numerous deficiencies in DFAS processes that impair the Navy's and the Marine Corps' ability to effectively reconcile their FBWT with Treasury records, including the following:

There are significant data reliability issues with the Defense Cash Accountability System (DCAS), which records daily collections and disbursements activity. The Navy and Marine Corps rely on DCAS to reconcile their FBWT to Treasury records.

DFAS-CL did not maintain adequate documentation for the sample items we tested to enable an independent evaluation of its efforts to research and resolve differences.

DFAS-CL recorded unsupported entries (plugs) to force Navy and Marine Corps appropriation balances to agree with those reported by Treasury instead of investigating and resolving differences between these two services' appropriation balances and those maintained by Treasury.

Navy, Marine Corps, and DFAS-CL officials acknowledged that existing FBWT policies and procedures were inadequate. Navy and DFAS-CL officials stated that the base realignment and closure changes from 2006 through 2008 resulted in the loss of experienced DFAS-CL personnel and that remaining staff have not received the needed training.
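The reconciliation DFAS-CL should be performing is conceptually simple even though the transaction volume is not. The Python sketch below illustrates the basic mechanics under simplifying assumptions (two flat lists of appropriation-and-amount pairs; real FBWT reconciliations work from DCAS and Treasury files with many more fields): differences are surfaced for research rather than forced to agree.

from collections import defaultdict

def reconcile_fbwt(agency_ledger, treasury_records):
    # Sum recorded activity by appropriation on each side. Inputs are
    # iterables of (appropriation, amount) pairs; the shapes are
    # assumptions for illustration only.
    agency = defaultdict(float)
    treasury = defaultdict(float)
    for appropriation, amount in agency_ledger:
        agency[appropriation] += amount
    for appropriation, amount in treasury_records:
        treasury[appropriation] += amount

    # Surface every difference for documented research; do not force
    # balances to agree with unsupported "plug" entries.
    differences = {}
    for appropriation in agency.keys() | treasury.keys():
        delta = agency[appropriation] - treasury[appropriation]
        if abs(delta) > 0.005:  # beyond rounding; must be researched
            differences[appropriation] = delta
    return differences

The contrast with the practice described above lies in the last step: a difference is returned for documented research and resolution, never written off with an unsupported entry.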
In response to our recommendations, the Navy developed a plan of action and milestones (POAM) intended to address the Navy's audit readiness weaknesses, including required FBWT reconciliations.

The Marine Corps received disclaimers of opinion from its auditors on its fiscal year 2010 and 2011 SBRs because it could not provide supporting documentation in a timely manner, and support for transactions was missing or incomplete. Further, the Marine Corps had not resolved significant accounting and information technology (IT) system weaknesses identified in the fiscal year 2010 SBR audit effort. The auditors also reported that the Marine Corps did not have adequate processes and controls, including systems controls, for accounting and reporting on the use of budgetary resources. Further, the Marine Corps could not provide evidence that reconciliations for key accounts (such as FBWT) and processes were being performed on a monthly basis. The auditors also identified ineffective controls in key IT systems used by the Marine Corps to process financial data. During fiscal year 2011, however, the Marine Corps was able to demonstrate progress toward auditability. For example, its auditors confirmed that as of October 2011, the Marine Corps had fully implemented 32 of 139 fiscal year 2010 audit recommendations.

The results of the audit for fiscal year 2010 provided valuable lessons on preparing for a first-time financial statement audit. In our September 2011 report, we identified five fundamental lessons that are critical to success. Specifically, the Marine Corps' experience demonstrated that prior to asserting financial statement audit readiness, DOD components must (1) confirm completeness of populations of transactions and address any abnormal transactions and balances, (2) test beginning balances, (3) perform key reconciliations, (4) provide timely and complete responses to audit documentation requests, and (5) verify that key IT systems are compliant with the Federal Financial Management Improvement Act of 1996 and are auditable. These issues are addressed in GAO's Standards for Internal Control in the Federal Government and DOD's FIAR Guidance. GAO has made prior recommendations to address these issues. DOD has generally agreed with these recommendations and is taking corrective actions in response. During our audit, Navy, Army, and Air Force FIP officials stated that they were aware of the Marine Corps lessons and were planning to incorporate, or had incorporated, them to varying degrees into their audit readiness plans.

In its November 2011 FIAR Plan Status Report, DOD provided an overall view of its accelerated FIAR Plan for achieving audit readiness of its SBR by the end of fiscal year 2014. In its February 2012 briefing, DOD recognized key factors that are needed to achieve auditability, such as the consistent involvement of senior leadership as well as the buy-in of field commanders who ultimately must implement many of the changes needed. The plan also provided interim milestones for DOD components such as the Army, Navy, Air Force, Defense Logistics Agency, and other defense agencies. Acceleration substantially compresses the time allotted for achieving some of these milestones. For example, the May 2011 FIAR Plan Status Report indicated that the Air Force had planned to validate its audit readiness for many SBR-related assessable units in fiscal year 2016 and that its full SBR would not be ready for audit until 2017.
However, the February 2012 briefing on the accelerated plans indicated that most of the Air Force's SBR assessable units will be audit-ready in fiscal year 2013 or 2014. These revised dates reflect the need to meet the expedited audit readiness goal of 2014. (See figure 2.) As discussed earlier, the key to audit readiness is for DOD components to effectively develop and implement FIPs for SBR assessable units, and to meet interim milestones as they work toward the 2014 goal. According to Navy officials, the Navy plans to prepare a FIP for each of several assessable units that make up the SBR. For example, for its SBR, Navy officials told us they have identified assessable units for appropriations received, and for various types of expenditures for which funds are first obligated and then disbursed, such as military pay, civilian pay, contracts, and transportation of people. The Air Force will prepare FIPs for assessable units similar to those of the Navy. Army officials told us they are taking a different approach from the Navy. They said that instead of developing FIPs for discrete assessable units constituting the SBR, they are preparing only one FIP, with one audit readiness date, for the Army's entire SBR, an approach similar to that of the Marine Corps.

For years, DOD has been developing and implementing enterprise resource planning (ERP) systems, which are intended to be the backbone of improved financial management. DOD considers the successful implementation of these ERP systems critical to transforming its business operations and addressing long-standing weaknesses in areas such as financial and supply-chain management and business systems modernization. DOD officials have also stated that these systems are critical to ensuring the department meets its mandated September 30, 2017, goal to have auditable departmentwide financial statements. However, as we recently reported, six of these ERP systems are not scheduled to be fully deployed until the end of fiscal year 2016 or during fiscal year 2017. The DOD IG reported that the Navy developed and approved deployment of the Navy ERP System without ensuring that the system complied with DOD's Standard Financial Information Structure (SFIS) and the U.S. Government Standard General Ledger. The DOD IG further stated that as a result, the Navy ERP System, which is expected to manage 54 percent of the Navy's obligation authority when fully deployed, might not produce accurate and reliable financial information. (See DOD Inspector General, Navy Enterprise Resource Planning System Does Not Comply With the Standard Financial Information Structure and U.S. Government Standard General Ledger, DODIG-2012-051 (Arlington, Va.: Feb. 13, 2012).)

Two ERP systems—the Army's General Fund Enterprise Business System (GFEBS) and the Air Force's Defense Enterprise Accounting and Management System (DEAMS)—are general ledger systems intended to support a wide range of financial management and accounting functions. However, DFAS users of these systems told us that they were having difficulties using the systems to perform their daily operations. Problems identified by DFAS users included interoperability deficiencies between legacy systems and the new ERP systems, lack of query and ad hoc reporting capabilities, and reduced visibility for tracing transactions to resolve accounting differences. For example, approximately two-thirds of invoice and receipt data must be manually entered into GFEBS from the invoicing and receiving system due to interface problems.
Army officials explained that the primary cause of the problem was that the interface specification that GFEBS is required by DOD to use did not provide the same level of functionality as the interface specification used by the legacy systems. At the time of our review, Army officials stated that they were working with DOD to resolve the problem, but no time frame for resolution had been established. DEAMS did not provide the capability—which existed in the legacy systems—to produce ad hoc query reports that could be used to perform data analysis needed for day-to-day operations. DFAS officials noted that when DEAMS did produce requested reports, the accuracy of those reports was questionable. According to DFAS officials, they are currently working with DEAMS financial management to design the type of reports that DFAS needs. While we were told that as of February 2012, the Army and the Air Force had corrective actions under way to address identified deficiencies, specific timelines had not been developed so that progress could be monitored.

In February 2012, we reported that independent assessments of four ERPs—the Army's GFEBS and Global Combat Support System (GCSS-Army), and the Air Force's DEAMS and Expeditionary Combat Support System (ECSS)—identified operational problems, such as deficiencies in data accuracy, inability to generate auditable financial reports, the need for manual workarounds, and inadequate training. DOD oversight authority limited the deployment of GFEBS and DEAMS on the basis of the results of the independent assessments. However, in June 2011, DOD authorized continued deployment of GFEBS and delegated further GFEBS deployment decisions to the Under Secretary of the Army. In addition to functional issues, we found that training was inadequate. According to DFAS personnel as of February 2012, the training they received for GFEBS and DEAMS did not fully meet their needs. DFAS personnel informed us that the training focused on an overview of GFEBS and DEAMS and how the systems were supposed to operate. While this was beneficial in identifying how GFEBS and DEAMS were different from the existing legacy systems, the training focused too much on concepts rather than the skills needed for DFAS users to perform their day-to-day operations. GAO has made prior recommendations to address these issues. DOD has generally agreed with these recommendations and is taking corrective actions in response.

Improving the department's business environment through efforts such as DOD's business enterprise architecture and improved business systems management is an important part of helping DOD achieve auditability. In June 2011, we reported that DOD had continued to make progress in implementing a comprehensive business enterprise architecture, transition plan, and improved investment control processes. However, we also reported that long-standing challenges had yet to be addressed. Specifically, we reported that while DOD continued to release updates to its corporate enterprise architecture, the architecture had yet to be augmented by a coherent family of related subsidiary architectures. For example, we reported that while each of the military departments had developed aspects of a business architecture and transition plan, none of them had fully developed a well-defined business enterprise architecture and transition plan to guide and constrain business transformation initiatives.
We also reported in June 2011 that DOD continued to improve its business system investment management processes, but that much remained to be accomplished to align these processes with investment management practices associated with individual projects and with portfolios of projects. (These best practices are identified in GAO's IT investment management guidance. See GAO, Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, GAO-04-394G (Washington, D.C.: Mar. 2004).) With regard to individual projects, DOD and the military departments all had documented policies and procedures for identifying and collecting information about IT projects and systems to support their business system investment management processes. However, neither DOD nor the military departments had fully documented policies and procedures for selecting a new investment, reselecting ongoing investments, integrating funding with investment selection, or management oversight of IT projects and systems. With regard to portfolios of projects, DOD and the Departments of the Air Force and Navy had assigned responsibility for managing the development and modification of IT portfolio selection criteria. However, neither DOD nor the military departments had fully documented policies and procedures for creating and modifying IT portfolio selection criteria; analyzing, selecting, and maintaining their investment portfolios; reviewing, evaluating, and improving the performance of their portfolios; or conducting post-implementation reviews.

In addition, while DOD largely followed its certification and oversight processes, we reported that key steps were not performed. For example, as part of the certification process, DOD assessed investment alignment with the business enterprise architecture but did not validate the results of this assessment, thus increasing the risk that decisions regarding certification would be based on inaccurate and unreliable information. GAO has made prior recommendations to address these issues in business systems modernization and IT investment management. DOD has generally agreed with these recommendations and is taking corrective actions in response. It is essential that DOD implement our recommendations aimed at addressing these long-standing challenges, as doing so is critical to the department's ability to establish the full range of institutional management controls needed for its financial management as well as its overall business systems modernization high-risk program. We have ongoing work to evaluate the department's efforts to comply with the NDAA for Fiscal Year 2012, as amended, including updating our evaluations of DOD's comprehensive business enterprise architecture and transition plan and improved investment control processes.

Section 1072 of the NDAA for Fiscal Year 2010 requires that new DOD programs be assessed to determine whether or not appropriate business-process reengineering efforts have been undertaken. The act further states that these efforts should ensure that (1) the business process to be supported by the defense business system modernization will be as streamlined and efficient as practicable and (2) the need to tailor commercial-off-the-shelf systems to meet unique requirements or incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable. In June 2011, we reported that, for those investments we reviewed, DOD and the military departments used DOD's Business Process Reengineering Guidance (dated April 2011) to assess whether the investments complied with the business-process reengineering requirement.
Consistent with the guidance, DOD and the military departments completed questionnaires to help them identify and develop approaches to streamlining and improving existing business processes. Once these assessments had been completed, the appropriate authorities asserted that business-process reengineering assessments had been performed. We also reported in June 2011 that while DOD and the military departments largely followed DOD's guidance, they did not perform the key step of validating the results of these reengineering assessments to ensure that they, among other things, accurately assessed process weaknesses and identified opportunities to streamline and improve affected processes. The reason DOD did not follow key aspects of the certification process—primarily not validating assessment results—was attributed in part to unclear roles and responsibilities. According to military department officials responsible for the investments we reviewed, validation activities did not occur because DOD policy and guidance did not explicitly require them to be performed. In addition, there was no guidance that specified how assessments should be validated. According to DOD officials, the oversight and designated approval authorities did not validate the DOD-level assessments and assertions because DOD policy and guidance had not yet been revised to require these authorities to do so. We have work under way to evaluate DOD's efforts to improve its business system investment process, including its efforts to address the act's business process reengineering requirement. GAO has made prior recommendations to address these issues. DOD has agreed with these recommendations and is taking corrective actions in response.

In closing, DOD has demonstrated leadership and sustained commitment since the first issuance of its FIAR Plan in 2005 and through improvements and responsiveness to our recommendations since then. DOD has made progress through the FIAR Guidance, with the development of a methodology for implementing the FIAR strategy. Full compliance with the guidance can provide a consistent, systematic process to help DOD components achieve and verify audit readiness. Without full compliance, as we have seen in our work, components may assert audit readiness while process deficiencies prevent validation, require corrective actions, and delay an audit for another fiscal year.

Automated information systems are essential for modern accounting and recordkeeping. DOD is developing its ERP systems as the backbone of its financial management improvement efforts, and they are critical for transforming its business operations. To be fully successful, implementation of ERP systems should be consistent with an effective corporate enterprise architecture and the development of streamlined business processes. DOD officials have stated that these systems are critical to ensuring that the department meets its mandated September 30, 2017, goal to have auditable departmentwide financial statements. However, implementation has been delayed by deficiencies in performance and the need for remedial corrective actions. According to DOD officials, for the ERP systems that will not be fully deployed prior to the audit readiness goals, the DOD components will need to identify effective workaround processes or modifications to legacy systems that will enable audit readiness. In doing so, the components will evaluate cost-effective modifications to legacy systems and implement any necessary changes.
DOD faces considerable implementation challenges and has much work to do if it is to meet the goals of an auditable SBR by fiscal year 2014 and a complete set of auditable financial statements by fiscal year 2017. It is critical that DOD continue to build on its current initiatives. Oversight and monitoring will also play a key role in making sure that DOD's plans are implemented as intended and that lessons learned are identified and effectively disseminated and addressed. Absent continued momentum and necessary future investments, the current initiatives may falter, similar to previous well-intended but ultimately failed efforts. We will continue to monitor progress and provide feedback on the status of DOD's financial management improvement efforts. We currently have work in progress to assess (1) the FIAR Plan's risk management process for identifying, assessing, and addressing risks that may impede DOD's ability to achieve the 2017 financial audit readiness goal; (2) DOD's funds control in relation to the reliability of its financial statements; (3) the schedule and cost of the Army's GCSS-Army; (4) components' efforts to prepare for SBR and full financial statement audits; and (5) DOD's actions in response to our recommendations.

As a final point, I want to emphasize the value of sustained congressional interest in the department's financial management improvement efforts, as demonstrated by this subcommittee's leadership. Chairman McCaskill, Ranking Member Ayotte, and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

If you or your staff have any questions about this testimony, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony include Valerie Melvin, Director; Cindy Brown Barnes, Assistant Director; Mark Bird, Assistant Director; Kristi Karls; Michael Holland; Chris Yfantis; and Maxine Hattery.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the years, the Department of Defense (DOD) has initiated several efforts intended to improve its financial management operations and ultimately achieve an unqualified (clean) opinion on its financial statements. These efforts have fallen short of sustained improvement in financial management and financial statement auditability. In this statement, GAO provides its assessment of DOD's progress toward (1) producing an auditable Statement of Budgetary Resources (SBR) by fiscal year 2014 and a complete set of auditable financial statements by fiscal year 2017, including the development of interim milestones for both audit readiness goals; (2) acquiring and implementing new enterprise resource planning systems and other critical financial management systems; (3) reengineering business processes and instituting needed controls; and (4) implementing a comprehensive business enterprise architecture and transition plan and improved investment control processes. This statement is primarily based on GAO's prior work related to the department's efforts to achieve audit readiness, implement modern business systems, and reengineer its business processes. GAO also obtained key milestones from a February 2012 DOD briefing on its updated plans to accelerate achieving SBR auditability and compared them with the May 2011 Financial Improvement and Audit Readiness plan, but did not independently verify the updated information in the February 2012 briefing.

GAO's recent work highlights the types of challenges facing the Department of Defense (DOD) as it strives to attain audit readiness and reengineer its business processes and systems. The urgency in addressing these challenges has been increased by the goals of an auditable DOD Statement of Budgetary Resources (SBR) by the end of fiscal year 2014 and a complete set of auditable financial statements by the end of fiscal year 2017. For example, GAO's 2011 reporting highlights difficulties the DOD components experienced in attempting to achieve an auditable SBR. These include the Navy's and the Air Force's premature assertions of audit readiness and missed interim milestones; the Army's inability to locate and provide supporting documentation for its military pay; the Navy's and Marine Corps' inability to reconcile their Fund Balance with Treasury (FBWT) accounts; and the Marine Corps' inability to receive an opinion on both its fiscal year 2010 and 2011 SBRs because it could not provide supporting documentation in a timely manner, and support for transactions was missing or incomplete.

In a February 2012 briefing on its updated plans, DOD accelerated milestones for its components—in some cases, significantly—to accomplish the 2014 SBR goal. For example, the Air Force had planned to validate its audit readiness for many SBR-related items in fiscal year 2016; however, the department's February 2012 accelerated plans show that most of the Air Force's SBR line items will be audit-ready in fiscal year 2013 or 2014. Also, in its February 2012 update, DOD shows that 7 of 24 material general fund Defense Agencies and Other Defense Organizations have either already had SBR audits or are ready to have their SBRs audited, which represents important positive steps. DOD has stated that it considers the successful implementation of its enterprise resource planning (ERP) systems critical to transforming its business operations, addressing long-standing weaknesses, and ensuring the department meets its mandated September 30, 2017, auditability goals.
However, in 2011, GAO reported that independent assessments of two of these systems—the Army's and Air Force's new general ledger systems—identified operational problems, gaps in capabilities that required manual workarounds, and training that was not focused on system operation. Moreover, DFAS users had difficulties using these systems to perform their daily operations. GAO also reported in 2011 on numerous weaknesses in DOD's enterprise architecture and business processes that affect DOD's auditability. For example, while DOD continued to update its corporate enterprise architecture, it had not yet augmented its corporate architecture with complete, coherent subsidiary architectures for DOD components such as the military departments. Also, while DOD and the military departments largely followed DOD's Business Process Reengineering Guidance to assess business system investments, they had not yet performed the key step of validating assessment results. GAO has made prior recommendations to address these issues. DOD has generally agreed with these recommendations and is taking corrective actions in response. GAO has work underway to evaluate DOD's continuing efforts in these areas.
A key goal of universal service is to ensure affordable telecommunications services to consumers living in high-cost areas, low-income consumers, eligible schools and libraries, and rural health care providers. Universal service programs are funded by statutorily mandated payments into the Universal Service Fund by companies that provide interstate and international telecommunications services. These payments are deposited into the federal Universal Service Fund, from which disbursements are made for the various federal universal service programs, including the Rural Health Care Program. Companies generally pass their universal service costs along to consumers through a universal service fee on customers' telephone bills.

FCC's current Rural Health Care Program is made up of three components that fund different benefits. As figure 1 illustrates, the first two components—the Telecommunications Fund and the Internet Access Fund—are commonly discussed together as the "primary Rural Health Care Program." Both components in the primary Rural Health Care Program offer discounts on services provided to a single site. In contrast, the third component—the pilot program—encourages health care providers to form comprehensive, multisite, state and regional dedicated health care networks. Figure 2 shows how the components in the current Rural Health Care Program may change if FCC adopts the proposed reforms described in its July 2010 NPRM. As the figure illustrates, the Health Broadband Services Program would replace the Internet Access Fund (and raise the discount percentage). A new Health Infrastructure Program would make available up to $100 million per year to support up to 85 percent of the construction costs of new regional or statewide networks for health care providers in areas of the country where broadband is unavailable or insufficient. This $100 million would be part of the overall $400 million annual spending cap that covers the Rural Health Care Program as a whole and that FCC established in 1997.

In managing the program, FCC oversees USAC—the not-for-profit corporation that administers the program. USAC uses its subcontractor, Solix, Inc., to carry out certain key aspects of the program, such as reviewing and processing funding applications. An MOU between FCC and USAC as well as FCC orders and rules set forth the roles and responsibilities of FCC and USAC in the management, oversight, and administration of the Rural Health Care Program. (See the sidebar on this page for examples of benefits provided by the Rural Health Care Program.)

Sidebar: USAC has highlighted the impact of the Rural Health Care Program on the Kodiak Area Native Association (KANA). KANA is a nonprofit corporation that provides health and social services for the Alaska Natives of the Koniag region. According to the Information Systems Manager of KANA, the support received from the primary Rural Health Care Program has "revolutionized telehealth services" at KANA. KANA patients once had to wait between 6 and 9 months to see a specialist, but telemedicine has reduced patient wait times to 2 weeks and has reduced travel costs, since many patient visits are conducted remotely by other health care providers. For example, a physician in Anchorage was able to assist a health aide in Kotzebue performing surgery when severe weather made air travel impossible. The Information Systems Manager reported that without support from the primary Rural Health Care Program, KANA would be forced to go back to using dial-up service.
Without this support, "it would be difficult to afford even the smallest connection between the villages and KANA."

First, to support telecommunications services, the Telecommunications Fund provides discounts on rural health care providers' telecommunications charges, including satellite service charges, so that rural and urban prices are comparable within each state. Second, to support advanced telecommunications and information services, the Internet Access Fund offers most rural health care providers a 25 percent flat discount on monthly Internet access charges. Eligible rural health care providers can apply for support from both the Telecommunications Fund and the Internet Access Fund. However, FCC has stated that rural health care providers have not participated at the rate it had expected. The steps that applicants must carry out to obtain support from one or both components of the primary Rural Health Care Program are illustrated in figure 3.

FCC created the third component of the current Rural Health Care Program, the pilot program, in September 2006 after acknowledging that the primary Rural Health Care Program was "greatly underutilized." FCC explained that "although there are a number of factors that may explain the underutilization" of the program, it was "apparent that health care providers continue to lack access to the broadband facilities needed to support…advanced telehealth applications." The pilot program funds 85 percent of the costs of deploying dedicated broadband networks connecting rural and urban health care providers, including the cost of designing and installing broadband networks that connect health care providers in a state or region, as well as the costs of advanced telecommunications and information services that ride over that network. This is in contrast to the primary Rural Health Care Program, which provides discounts only on monthly recurring costs for telecommunications services or Internet access to rural health care providers. The pilot program also provides funding for the cost of connecting state or regional networks to Internet2 or National LambdaRail—two national networks that connect government research institutions as well as academic, public, and private health care institutions—and the costs of connecting to the public Internet.

Any public or nonprofit health care provider—whether located in an urban or a rural area—was eligible to apply for funding when the pilot program was announced. However, the program rules required that applicants' proposed networks include at least a de minimis number of public and nonprofit health care providers that serve rural areas. FCC received 81 applications from projects seeking to participate in the pilot program. In November 2007, FCC announced that it had accepted 69 of the 81 projects into the program and capped total funding for all of the pilot projects at roughly $418 million over 3 years. Since then, a few projects have merged and 1 project has withdrawn from the program. The size and scope of the remaining projects vary widely. For example, the Illinois Rural HealthNet project is using pilot program funds to pay for the installation of approximately 1,250 miles of buried fiber. The purpose of this fiber is to create the backbone of a network that will connect rural critical access hospitals, health clinics, and community mental health centers to specialists throughout the state and nation. In contrast, a project in Wisconsin is planning to use pilot program funds to link two existing fiber systems, establishing connections between four hospitals that will allow health care specialists to transmit images between facilities.
(See the sidebar on this page for other examples of pilot projects.) USAC administers the pilot program pursuant to FCC's rules. Each pilot project must designate a project coordinator and associate project coordinator, who manage the administrative aspects of the program for the project and submit the required forms. USAC provides each project with a "coach"—that is, a designated Solix staff person who works closely with a pilot project to assist the project through the program's administrative requirements and processes. With some exceptions, the pilot program forms and administrative processes are the same as those previously described in the primary Rural Health Care Program. However, pilot participants pay 15 percent of eligible costs (and all ineligible costs), and the pilot program funds up to 85 percent of eligible costs. In addition, pilot participants must meet additional requirements before they can receive funding:

The lead entity in charge of the pilot project must obtain a letter of agency from every entity participating in its project. This letter authorizes the lead entity to act on the other entity's behalf in all matters related to the pilot program.

Pilot participants must develop a sustainability plan describing how the project will be self-sustaining in the future, to include network ownership and membership arrangements, and describing sources of future support.

Pilot participants are required to submit quarterly progress reports describing the status of their project.

In February 2010, FCC's Wireline Competition Bureau extended by 1 year, to June 30, 2011, the deadline for participants in the pilot program to submit to USAC requests for funding commitments.

Annual disbursements from the primary Rural Health Care Program have increased from 1998 through 2009, yet they have never approached FCC's original projections for participation. Figure 4 shows the total amount of funds that have been disbursed for the primary Rural Health Care Program from 1998 to 2009. USAC disbursed just over $327 million for the primary program from 1998 through 2009. Thus, as figure 4 illustrates, total program expenditures in 12 years of disbursements have not yet reached the single-year funding cap of $400 million. Also, as of September 2010, USAC had disbursed just over $26 million for the pilot program. Therefore, USAC has disbursed less than $400 million for all three components of the Rural Health Care Program since the program began in 1998. (FCC does not collect $400 million each year from telecommunications carriers for this program, but rather bases collections only on projected expenditures. FCC uses a quarterly evaluation of health care provider demand to assess how much telecommunications companies must contribute to the Universal Service Fund each quarter. This means that if FCC's proposed reforms create more participation in the program, telecommunications companies would need to pay more in Universal Service Fund contributions. Telecommunications companies would likely pass these costs on to consumers through higher universal service fees on consumers' telephone bills.)

According to USAC data, primary Rural Health Care Program funding was disbursed to all of the types of rural health care providers designated by statute as eligible to participate in the program. As figure 5 illustrates, over 68 percent of total applicants in 2008 were either rural health clinics or not-for-profit hospitals.
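Before turning to applicant and disbursement trends, the support levels under the program's components, as described above, can be summarized in a short Python sketch. The function names are hypothetical, and the dollar inputs in the usage example are invented for illustration; the percentages come from the program rules discussed in this report (the 25 percent Internet discount, the 50 percent rate for entirely rural states established by the 2004 order discussed later, and the pilot's 85 percent share).

def telecom_fund_support(urban_rate: float, rural_rate: float) -> float:
    # Telecommunications Fund: covers the difference between the rural
    # rate and the comparable urban rate, so rural and urban prices
    # are comparable within a state.
    return max(rural_rate - urban_rate, 0.0)

def internet_fund_support(monthly_charge: float,
                          entirely_rural_state: bool = False) -> float:
    # Internet Access Fund: a flat 25 percent discount on monthly
    # Internet access charges (50 percent in entirely rural states,
    # per the 2004 order discussed later in this report).
    rate = 0.50 if entirely_rural_state else 0.25
    return rate * monthly_charge

def pilot_support(eligible_costs: float) -> float:
    # Pilot program: funds up to 85 percent of eligible network costs;
    # participants pay the remaining 15 percent and all ineligible costs.
    return 0.85 * eligible_costs

# Hypothetical monthly figures for illustration only; the $13,000
# satellite rate echoes the Alaska example discussed below.
print(telecom_fund_support(urban_rate=500.0, rural_rate=13_000.0))  # 12500.0
print(internet_fund_support(200.0, entirely_rural_state=True))      # 100.0
print(pilot_support(1_000_000.0))                                   # 850000.0

The sketch makes plain why the Telecommunications Fund dominates in high-cost satellite states: its support scales with the rural-urban rate gap, while the Internet Access Fund discount is a fixed fraction of a much smaller monthly charge.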
As with disbursements, the number of applicants to the primary Rural Health Care Program has generally increased since the program began. Figure 6 shows that the number of rural health care providers applying to the primary Rural Health Care Program has increased from a low of 1,283 applicants in 1999 to 4,014 in 2009. Similarly, the number of funding commitments issued to participants in the primary Rural Health Care Program has exhibited a slow, steady increase over time, from 799 funding commitments in 1998 to 6,790 in 2008. Figure 7 shows the number of funding commitments by the type of service requested (e.g., telecommunications services or Internet access services).

Funding commitments have varied considerably among applicants within the states and territories, with almost 55 percent of the funding going to applicants in Alaska. Disbursements range from over $178 million for Alaska to none for three states (Connecticut, New Jersey, and Rhode Island). Health care providers in Wisconsin received the second-largest disbursement, approximately $18.5 million (almost 5.7 percent) of all primary Rural Health Care Program funding. For a snapshot of funding to applicants by state and territory, see appendix II, which contains the numbers of applicants and amounts committed by state for 2008. Table 1 shows the total amount of money that has been committed and disbursed to applicants, by state, over the program's history. Figure 8 shows the total dollar amount disbursed across the United States for funding year 2008, by ZIP code, illustrating the wide variation in geographic use and the heavy concentration of funding in Alaska.

According to FCC and USAC staff, health care providers in Alaska dominate use of the primary Rural Health Care Program because Alaska's rural areas often require expensive satellite telecommunications services. Alaska's vast size, harsh winter weather, and sparse population make fiber networks and other technologies either too expensive or infeasible. Some wireless technologies also can be challenging, since Alaskan terrain often includes mountains or forests that can obstruct line-of-sight transmission. As a result, satellite is often the most feasible option for many rural communities in Alaska. Although the cost of telecommunications service in rural areas can vary considerably, satellite service can cost up to $13,000 per month, creating a significant difference between urban and rural rates in parts of Alaska and making FCC's Rural Health Care Program particularly attractive under such circumstances.

We also found that, according to USAC data, FCC and USAC have been successful in disbursing committed funds in the primary Rural Health Care Program. Table 1 shows that USAC generally disburses most of the funds that are committed to rural health care providers. Of the more than $380 million committed for the program, over $327 million (over 86 percent) has been disbursed, leaving just over $53 million that has been committed but not disbursed since the program began. Some of this $53 million will eventually be disbursed as USAC closes more recent funding years.

A needs assessment is crucial to both the effective design of new programs and the assessment of existing programs. The primary purpose of a needs assessment is to identify needed services that are lacking (in this case, telecommunications services for rural health care providers) relative to some generally accepted standard.
By establishing measures of comparison, program managers can more accurately determine how well their programs are doing in meeting the needs of the targeted population of the program. We have previously recommended that needs assessments include the following characteristics:

benchmarks to define when needs have increased or decreased;

a plan to determine how needs assessment results will be prioritized in supporting resource allocation decisions; and

integration of information on other resources available to help address the need.

However, throughout its 12 years of managing the program, FCC has not conducted a comprehensive needs assessment to learn how the program can best target the telecommunications needs of rural health care providers within the broad latitude provided by Congress in the 1996 Act. When designing the $400 million annual spending cap for the Rural Health Care Program, FCC officials noted the scarcity of information available about the universe of eligible providers and what it might cost to meet the providers' telecommunications needs. As our analysis showed, the current $400 million spending cap is not based on meaningful estimates of program participation. FCC stated in its 1997 report and order that the Rural Health Care Program spending cap is "based on the maximum amount of service that we have found necessary and on generous estimates of the number of potentially eligible rural health care providers." FCC acknowledged at the time that it expected actual program disbursements to be less than the cap for a number of reasons.

Although FCC expected program disbursements to be under $400 million annually, on multiple occasions FCC has released documents stating that the primary Rural Health Care Program is underutilized. For example, in its 2006 pilot program order, FCC states that the primary Rural Health Care Program "continues to be greatly underutilized and is not fully realizing the benefits intended by the statute and our rules. In 1997, we authorized $400 million per year for funding of this program. Yet, in each of the last 10 years, the program generally has disbursed less than 10 percent of the authorized funds." When we asked FCC officials what acceptable utilization of the program would mean, they said that they did not know, but that program utilization would include disbursing funds somewhere between 10 percent and 100 percent of the allowable cap. FCC's repeated claim that the program is underutilized, without a more specific vision of what utilization would mean, is troublesome. No needs assessment has been conducted to show that the program is, in fact, underutilized. A comprehensive needs assessment could provide useful information to help FCC officials envision acceptable program utilization—that is, how many providers actually need services, rather than just how many providers are eligible to participate under program rules.

As part of our review, we interviewed knowledgeable stakeholders to identify potential reasons for FCC's reported underutilization. These reasons include the following:

Some health care providers lack the infrastructure (e.g., the broadband facilities needed to support telemedicine) to use advanced telecommunications services.

The application process is too complex and cumbersome to justify participation.

The 25 percent Internet subsidy is not large enough to encourage participation.
The difference between urban and rural telecommunications rates is negligible or not significant enough to justify devoting resources to program participation.

Rural health care providers do not have enough administrative support to apply to the program annually.

Some eligible health care providers may not know about the program.

Statutory restrictions prevent support to certain providers who might benefit from the program (e.g., emergency medical technicians).

Some health care providers cannot afford expensive telemedicine equipment; therefore, they are not concerned with gaining access to the telecommunications services needed to use that equipment.

Some Medicare and Medicaid rules, including reimbursement limitations, may inhibit the use of telemedicine technologies; therefore, health care providers may not be concerned with gaining access to the telecommunications services needed to support those technologies.

Despite these and other issues, we were also told that many of the current program participants are dependent on the benefits they receive from the primary Rural Health Care Program. Although it lacks a needs assessment, FCC has made multiple changes to the primary Rural Health Care Program over time in an attempt to address underutilization and better meet providers' needs. For example:

In a 2003 report and order, FCC provided support for rural health care providers to obtain a 25 percent discount off the cost of monthly Internet access services. The 2003 report and order states: "Because participation in the rural health care support mechanism has not met the Commission's initial projections, we amend our rules to improve the program, increase participation by rural health care providers, and ensure that the benefits of the program continue to be distributed in a fair and equitable manner."

In a 2004 report and order, FCC changed the definition of "rural," revised its rules to expand funding for mobile rural health care services, and allowed a 50 percent subsidy (rather than 25 percent) for Internet access services for health care providers in entirely rural states. According to USAC, this report and order increased the number of health care providers eligible to participate in the primary Rural Health Care Program by adding new rural areas while grandfathering health care providers in areas no longer defined as rural.

In a 2006 order, FCC announced the pilot program, which will be discussed in greater detail in the next section of this report. FCC created the pilot program to address two potential reasons for the primary Rural Health Care Program's possible underutilization: lack of infrastructure and lack of access to dedicated broadband networks.

Without a needs assessment, however, FCC does not have key information regarding the extent to which any of these reasons actually affected the primary Rural Health Care Program's participation rate. FCC officials told us that the changes FCC has made to the program were based primarily on information gathered through the agency's notice and comment procedures and internal deliberations. FCC officials told us that this is how FCC—as a federal regulatory agency—conducts its business pursuant to the Administrative Procedure Act (APA). However, there is nothing in the APA process that would have precluded FCC from conducting a formal needs assessment. Using data-based assessments to supplement the information gained through FCC's regulatory procedures would enhance FCC's ability to fulfill its role as the manager of the Rural Health Care Program.
Specifically, if FCC had obtained data through a formal needs assessment, it may have been able to more accurately ascertain why some rural health care providers are not participating, and have better ensured that programmatic changes achieved the intended results. FCC missed multiple opportunities to collaborate with USAC, federal agencies, and other knowledgeable stakeholders when designing the pilot program. These stakeholders all could have provided useful insights into FCC's design of the pilot program. Such consultations could have helped FCC better identify potential pitfalls in its pilot program design as well as meaningful opportunities to leverage federal resources and ensure that the pilot program targeted rural health care providers' needs in the most efficient way. Although USAC officials had 9 years' experience working with the rural health care community and administering the primary Rural Health Care Program, FCC did not consult with USAC officials prior to issuing the 2006 order calling for applications to the pilot program. Our prior work has noted the importance of involving stakeholders (including third-party administrators like USAC) when designing, implementing, and evaluating programs. FCC officials stated that they did not consult with USAC because USAC does not formulate policy. However, USAC's experience with the primary Rural Health Care Program may have provided FCC with valuable insights into how to design a pilot program, particularly regarding the administrative processes and forms. For example, FCC's decision to use the primary program's forms and processes for the pilot program led to a complicated administrative process, particularly since some aspects of the primary Rural Health Care Program's forms and administrative processes were ill-suited to the pilot program. Because FCC used primary program forms rather than creating new and more tailored ones for the pilot program, the forms required complicated attachments. According to our survey of pilot project representatives, of the 57 respondents that expressed an opinion, 38 respondents rated assembling their request for proposals (RFP) package (Form 465 package) as "very difficult" or "somewhat difficult." In addition, 27 of the 42 respondents that provided an opinion rated assembling their requests for funding (Form 466-A packages) as "very difficult" or "somewhat difficult." Solix officials agreed that pilot participants seemed to have difficulty in completing these forms and attachments. FCC also missed opportunities to coordinate with other federal agencies when designing the pilot program. We have noted that a lack of collaboration among federal agencies can lead to a patchwork of programs that can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. A number of federal agencies are involved in telemedicine efforts, and some provide funds to health care providers that could complement FCC's pilot program. For example, the Health Resources and Services Administration (HRSA), the primary federal agency for improving access to health care services for people who are uninsured, isolated, or medically vulnerable, administers a Telehealth Network Grant Program that provides funds to projects to demonstrate how telehealth programs and networks can improve access to quality health care services in underserved rural and urban communities.
However, with the exception of USDA, FCC did not contact other federal agencies before announcing the pilot program in 2006. FCC officials told us that after announcing the creation of the pilot program in 2006, they met with representatives from various agencies within HHS in 2007 to discuss coordination. Representatives from some of these agencies reported that these meetings were primarily informational, with FCC explaining its pilot program to them, and that no strategies for collaboration or follow-up were developed. USDA officials stated that FCC officials met with them prior to announcing the pilot program to discuss USDA's Distance Learning and Telemedicine Program, including how USDA scored applications and evaluated the program. However, it is unclear how FCC used the information that USDA provided, since similar information was not provided in FCC's call for applications or order selecting pilot projects. According to federal and other stakeholders, officials at other agencies also could have provided FCC with an understanding of rural health care providers' needs, potential information technology (IT) issues, and how to design a more user-friendly program, and could have helped FCC identify additional appropriate service providers, one of which had to petition to be included. FCC also did not request public comment on its proposed design for the pilot program. Although FCC did request comments in 2004 on providing some infrastructure support by funding upgrades to the public switched or backbone networks, FCC did not indicate that it was considering a pilot program to fund the creation of private networks, or provide specific details on how such a program would operate. We have previously reported that FCC's use of NPRMs to pose broad questions without providing actual rule text can limit stakeholders' ability to determine either what action FCC is considering or what information would be most helpful to FCC when developing a final rule. FCC officials said that they did not issue a Notice of Inquiry (NOI) regarding the pilot program because the process would have delayed the pilot program. However, providing the public with advance notice of proposed changes and an opportunity to comment on them is desirable in that it allows agencies, according to a 2006 resource guide, to "find out earlier rather than later about views and information adverse to the agency's proposal or bearing on its practicality." Similarly, in comments submitted to FCC, the National Telecommunications Cooperative Association observed that "interested or affected parties had no opportunity to explore with the Commission various aspects of the Pilot Program." In addition, industry concerns regarding the funding of redundant networks arose after the implementation of the pilot program. If FCC had provided a more detailed explanation of the proposed pilot program and requested comment prior to establishing the program, it may have been better prepared to address these concerns. FCC called for applications to participate in the pilot program before it fully established pilot program requirements. This, along with the addition of requirements as the pilot program has progressed, has led to delays and difficulties for pilot participants. Most importantly, the entire pilot program itself has been delayed.
Participants may issue multiple RFPs as they progress through various stages of designing and constructing their networks, but the deadline for pilot participants to submit all of their requests for funding (projects submit at least one Form 466-A for each RFP they issue) to USAC was June 30, 2010. On February 18, 2010, FCC extended this deadline by 1 year, to June 30, 2011. According to USAC data, at the time of the extension, projects had requested 11 percent of the roughly $418 million in total program funding. As of July 31, 2010, projects had requested 17 percent of the total program funding. As shown in figure 9, as of July 31, 2010, 28 projects (45 percent) had received at least one funding commitment letter, but 18 projects (29 percent) had not yet posted an RFP. According to our survey, delayed and inconsistent guidance led to delays for many pilot projects. In addition, it appears pilot participants have struggled with requirements that were added at the same time that FCC announced the pilot participant selections, such as the need to obtain letters of agency. Figure 10 indicates the number of survey respondents reporting whether they experienced certain issues during the course of their project, and the number of respondents that reported they were delayed by that issue. Table 2 reports the results from our survey question that asked pilot participants to rate the ease or difficulty of performing various program tasks. Four of the tasks rated as "very difficult" or "somewhat difficult" by more than half of the respondents that provided an opinion fall into one of two categories: the task is associated with program processes and forms that were carried over from the primary Rural Health Care Program (Form 465 and Form 466-A), or the task is a requirement FCC added but did not mention in the initial call for applications (developing a sustainability plan and obtaining letters of agency). In addition, when asked to list the top three things that program officials should change if FCC established a new, permanent program with goals similar to those of the pilot program, simplifying or improving the administrative process was the most frequently mentioned issue. The Grant Accountability Project, a 2005 effort by the Domestic Working Group chaired by the former United States Comptroller General, notes that an agency's ability to ensure that funds are used as intended is impaired when the terms, conditions, and provisions in award agreements are not well written. The group also notes that a thorough assessment of proposed projects can reduce the risk that money may be wasted or projects may not achieve intended results. However, FCC did not fully establish the requirements of the pilot program before it requested applications, and it required projects to provide additional information after they were accepted into the program. This led to delays because (1) participants needed additional guidance on how to meet the requirements and (2) Solix staff (under USAC direction) had to retroactively review projects to determine the eligibility of the participating entities and activities. In addition, some participants faced difficulties in covering expenses that were determined to be ineligible for program funding. In contrast, other federal agencies generally provide extensive detail on program rules when calling for applications for competitive funding programs, including the criteria by which applications will be judged and how the criteria will be weighted.
For example, USDA’s 2010 Distance Learning and Telemedicine Program Grant Application Guide provides potential applicants with specific information on eligible uses for the funds, eligible match funding, copies of the forms to be used, and information on the application scoring process. FCC’s 2006 order establishing the pilot program and calling for applications did not provide detailed information on many essential aspects of the program, including which entities would be eligible to participate in the program (FCC provided a legal citation, but no actual text); which expenses could be paid for with program funds; and how projects could fund their 15 percent match. After issuing its call for applications, FCC did provide some of this information on its Frequently Asked Questions Web page on the pilot program. However, this information may not have reached all interested parties, and it would have been more efficient to determine these issues in advance of requesting applications. FCC also provided some of this information in its 2007 order; however, by this time, FCC was also announcing which projects were selected. FCC did not fully screen applications to determine the extent to which their proposed activities and entities would be eligible for funding. Thus, several of the accepted projects had ineligible components. Specifically, based on survey respondents that provided a substantive answer: 25 of 59 respondents included an entity that was determined to be ineligible, 25 of 57 respondents included an expense that was determined to be 10 of 59 respondents relied on ineligible sources to fund their match. The lack of established criteria and an in-depth screening prior to announcing pilot project awards led to a lengthy process by which Solix staff (under USAC direction) determine the eligibility of entities postaward. Pilot participants must submit documentation for every entity in their project, which Solix staff then review to determine eligibility. According to our survey, 36 of 60 respondents were delayed by difficulties in compiling and submitting the documentation needed to establish entity eligibility. In addition, although 39 survey respondents rated the current program guidance regarding entity eligibility as “very clear” or “somewhat clear”; some confusion remains, as 20 pilot participants rated the current guidance as “slightly clear” or “not at all clear.” FCC’s 2007 order also notes that program administration costs, such as personnel, travel, legal, marketing, and training costs, are ineligible for program funding. Pilot participants have indicated in written comments to FCC that they did not anticipate that administrative costs would not be eligible for funding, and some have faced challenges in funding these costs themselves. In our survey, 36 of 51 respondents indicated that funding ineligible costs, including administrative costs, has been “very difficult” or “somewhat difficult.” In addition, when asked to list the top three things that program officials should change if FCC established a new, permanent program with goals similar to those of the pilot program, providing funding for administrative costs was the second-most frequently mentioned issue. FCC’s 2007 order also introduced new requirements that were not mentioned in the 2006 order. 
For example, while the 2006 order states that the pilot program would use the same forms and administrative processes used in the primary Rural Health Care Program, the 2007 order also requires projects to secure a letter of agency from every entity participating in a project. This letter authorizes the lead project coordinator to act on the signing agency's behalf. Since a number of the selected pilot program participants included providers that were also participating in another participant's proposed network, FCC noted that the letter of agency would demonstrate that the entity had agreed to participate in the network and prevent improper duplicate support for providers participating in multiple networks. Considering that FCC encouraged applicants to create statewide networks, some projects have hundreds of participating entities, creating the need for such projects to secure hundreds of letters of agency. The letter of agency requirement has proven extremely time-consuming and resource-intensive for some projects. According to our survey, 34 of 60 respondents faced difficulties in securing letters of agency, and 27 of these respondents were delayed because of these difficulties. (See fig. 10.) Similarly, according to our survey results, FCC has not provided sufficient guidance to pilot projects on how to meet FCC's requirement that projects comply with HHS health IT initiatives. In response to a letter from HHS, FCC outlined a number of requirements in its 2007 selection order that pilot program participants should meet to ensure that their pilot projects were consistent with HHS health IT initiatives. FCC officials stated that the explanation of the requirements in its 2007 selection order, in addition to a guidance document created in 2008, provided guidance for the pilot projects on how to meet these requirements. However, one HHS official described the language in the 2007 selection order as vague and in need of an update. In addition, the 2008 guidance document has not been revised to reflect new developments in interoperability specifications and certification programs. Currently, pilot participants are required to explain in each quarterly report how they are complying with the HHS health IT requirement. However, 34 of 47 survey respondents who provided an opinion stated that the guidance provided on how to meet these requirements was "slightly sufficient" or "not at all sufficient." Following the release of the 2007 order, FCC created additional requirements as the program progressed and did not provide program guidance in a timely manner on how to meet these requirements. For example, FCC stated in its 2007 award order that selected pilot participants generally "provided sufficient evidence that their proposed networks will be self-sustaining by the completion of the pilot program." However, program officials began requiring more detailed sustainability plans in the fall of 2008, after some projects had gone through the competitive bidding process and had requested funding commitment letters from USAC. Aside from an October 24, 2008, letter to USAC in which FCC noted that participants should "disclose all sources or potential sources of revenue that relate to the network," as well as any intentions to sell or lease excess capacity, in their sustainability plans, FCC did not provide any other written guidance on what specific information should be included in participants' sustainability plans until April 2009.
At that time, FCC posted an item to its Frequently Asked Questions Web page that suggested more information to be included in a participant's sustainability plan, including status of obtaining the match, projected sustainability period, network membership agreements, ownership structure, sources of future support, and management structure. USAC and Solix officials noted that some pilot participants believed that because their application was accepted by FCC, they met all of the program requirements for sustainability. In some cases, this misunderstanding led to confusion and disagreements between the pilot participants and program officials regarding the need for additional information and the amount of time that a sustainability plan should cover. Moreover, it appears some confusion remains. When asked to rate their satisfaction with any guidance they received thus far on how to develop a sustainability plan, 21 of the 60 survey respondents that provided an opinion were "very dissatisfied" or "somewhat dissatisfied." In addition, 39 of 59 survey respondents faced difficulties in developing a sustainability plan; 33 of these respondents stated that difficulties in developing a sustainability plan have delayed their projects. (See fig. 10.) In addition, because FCC is responsible for all policy decisions regarding the pilot program, unique or difficult situations are typically referred from USAC to FCC for its decision. The need for such consultations is compounded by an absence of formal written guidance for USAC and pilot participants. We have reported that information should be recorded and communicated to management and others who need it in a form and within a time frame that enables them to carry out their responsibilities. FCC staff stated that, in some cases, they have not provided written guidance because they want the pilot program to remain flexible. However, in some cases, it has taken several months for FCC to make a decision or provide guidance on issues, and pilot participants appear to be dissatisfied with certain elements of program guidance. As noted in figure 11, of the 59 respondents that provided an opinion, 32 respondents were "very dissatisfied" or "somewhat dissatisfied" with the clarity of program guidance, and 28 respondents were "very dissatisfied" or "somewhat dissatisfied" with the amount of formal written guidance. In addition, as we note in figure 10, 37 of 60 respondents stated that they had received inconsistent guidance. Similarly, when asked to list the top three things that program officials should change if FCC established a new, permanent program with goals similar to those of the pilot program, providing more guidance and templates was the third-most frequently mentioned issue. In addition, although FCC recognized in its 2006 order calling for applications that ineligible entities might be participating in the networks and would need to pay their fair share of costs, FCC chose not to establish detailed guidance on how to address such issues prior to establishing the program. Instead, FCC provided guidance as questions arose from USAC (USAC's first request about how to determine fair share was in June 2007, when it was noted that a substantial number of ineligible entities were included in applications submitted to FCC) and from pilot program participants.
According to USAC officials, questions concerning payment of fair share or incremental costs for excess capacity shared with ineligible entities arose at the pilot program training in February 2008, and participants continued to raise such questions with USAC, and USAC with FCC, throughout 2008. The most recent FCC guidance was a March 2009 matrix outlining nine scenarios in which excess capacity could be used. However, issues regarding excess capacity remain. FCC's July 2010 NPRM notes that "rules governing the sharing of this subsidized infrastructure are necessary to prevent waste, fraud and abuse," and requests comment on a number of detailed questions regarding the sharing of excess capacity with ineligible entities, different methods for allocating costs among entities, providing excess capacity for community use, and what types of guidance are needed. According to our survey, 21 of 60 respondents indicated that their project "definitely" or "probably" will include excess capacity. Although there have been some challenges, many pilot participants emphasized the importance of the pilot program in their responses to our survey as well as in comments submitted to FCC. According to our survey, if pilot participants are able to accomplish their pilot project goals: 55 of 57 respondents indicated their project "definitely" or "probably" will have entities that obtain telecommunications or Internet services that would otherwise be unaffordable; 48 of 55 respondents indicated their project "definitely" or "probably" will have entities obtain telecommunications or Internet services that would otherwise be unobtainable due to lack of infrastructure; and 58 of 59 respondents indicated that their project "definitely" or "probably" will have entities upgrade an existing telecommunications or Internet service. In addition, when asked to consider their current understanding of the costs and administrative requirements of participating in the pilot program, 52 of 57 respondents reported that the pilot program's benefits will outweigh the costs of participating in the program. Pilot participants were also generally positive about the usefulness of program officials, in particular their coach. When asked to list the top three things that program officials did well in their administration of the program, respondents provided positive opinions about their communications with program officials, the effort put forth by program officials, and the coaches or the coaching concept. These responses are consistent with those provided to another question that rated program officials—be they FCC, USAC, or a project coach—as the most useful resource for pilot participants. (See fig. 12.) Pilot participants were also satisfied with their coaches as a source of information. USAC appointed coaches to serve as a project's direct point of contact with program officials. Of the 61 respondents who provided an opinion, some specifically noted their satisfaction with the ease with which they could contact their coach (53 respondents were "very satisfied" or "somewhat satisfied") and the level of interaction with their coach (49 respondents were "very satisfied" or "somewhat satisfied"). Coaches were rated somewhat lower on their knowledge of the program (40 respondents were "very satisfied" or "somewhat satisfied"), although this lower rating may be related to the lack of established guidance at the beginning of the program and the need to refer difficult issues to Solix management, USAC, and FCC, depending on the complexity of the issue.
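As noted above, FCC's July 2010 NPRM requests comment on different methods for allocating costs among entities that share subsidized networks. One way to make the fair-share question concrete is a simple proportional allocation, in which each entity pays in proportion to the capacity it uses and ineligible entities receive no subsidy on their allocated share. The sketch below is purely illustrative: the proportional method, the entity names, and all dollar and bandwidth figures are hypothetical assumptions, not a formula prescribed by FCC's rules or its March 2009 scenario matrix.

```python
# Hypothetical sketch of one possible "fair share" allocation: each entity
# sharing a network pays costs in proportion to the bandwidth it consumes,
# and ineligible entities pay their share in full, without subsidy. This is
# an illustrative method only, not FCC's prescribed approach.

def allocate_fair_share(total_cost, entities):
    """Split a network's total cost across entities by bandwidth share.

    entities: list of (name, eligible_flag, bandwidth_mbps) tuples.
    Returns a dict mapping each entity to its allocated cost and whether
    that cost may receive program support (eligible entities only).
    """
    total_bandwidth = sum(bw for _, _, bw in entities)
    allocation = {}
    for name, eligible, bw in entities:
        share = total_cost * (bw / total_bandwidth)
        allocation[name] = {
            "allocated_cost": round(share, 2),
            "subsidy_eligible": eligible,  # ineligible entities pay in full
        }
    return allocation

# Example: two eligible providers sharing capacity with an ineligible
# for-profit office on a hypothetical $100,000 network.
network = [
    ("Rural Clinic A", True, 10.0),
    ("County Hospital B", True, 30.0),
    ("For-Profit Office C", False, 10.0),
]
for entity, result in allocate_fair_share(100_000, network).items():
    print(entity, result)
```

Other allocation bases, such as flat per-entity shares or incremental cost, could be illustrated just as easily; the point is only that any such rule must be stated in advance so that participants know how ineligible entities' fair share will be computed.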
In its July 2010 NPRM, FCC proposed a new Health Infrastructure Program that would make available up to $100 million per year to support up to 85 percent of the construction costs of new regional or statewide networks for health care providers in areas of the country where broadband is unavailable or insufficient. In this NPRM, FCC made improvements over previous NPRMs by outlining potential program requirements and requesting comment on the proposed new program. FCC provided much more information than it did when announcing the pilot program and is allowing for stakeholder input into the program's design. In addition, FCC recognized some of the challenges mentioned in this report and requested comment on potential improvements. In particular, FCC proposed and requested comment on requiring that applicants prove or otherwise certify that broadband at minimum connectivity speeds is unavailable or insufficient to meet their health care needs when applying to the program; requiring applicants to submit letters of agency as part of their application, rather than after they are accepted into the program; having USAC review entity eligibility; providing limited funding for administrative costs; expanding entity eligibility to include off-site administrative offices, off-site data centers, nonprofit skilled nursing facilities, and nonprofit renal dialysis facilities; and providing additional guidance regarding the funding and permitted uses of excess capacity and the allocation of related costs. However, the extent to which FCC is coordinating with USAC in preparing for this program remains unclear. The NPRM indicates that USAC will develop a user-friendly Web-based application for participants to use. However, during our conversations with USAC, officials noted that it would take a considerable amount of time and effort to properly develop such systems. FCC indicates in its NPRM that the new programs could be implemented by funding year 2011. If FCC does not better plan the details of the new program before it calls for applications, participants in the Health Infrastructure Program may experience the same delays and difficulties as participants have experienced in the pilot program. We have previously reported that results-oriented organizations commonly perform a number of key practices to effectively manage program performance. In particular, results-oriented organizations implement two key practices, among others, to lay a strong foundation for successful program management. First, these organizations set performance goals to clearly define desired outcomes. Second, these organizations develop performance measures that are clearly linked to the program goals. However, FCC has approached these two key practices in reverse order, adopting measures before establishing goals. In 2006, 8 years after FCC first implemented the primary Rural Health Care Program, the Office of Management and Budget (OMB) assessed FCC's Rural Health Care Program and concluded that the program had no performance goals and measures. In 2007, FCC issued a report and order adopting performance measures for the Rural Health Care Program related to USAC's processing of applications, paying invoices, and determining appeals. However, FCC stated that it did not have sufficient data to establish performance goals for the Rural Health Care Program in the report and order.
Instead of specific performance-related goals, the Rural Health Care Program has operated for 12 years under broad overarching goals, including the statutory goal established by Congress in the 1996 Act, which is to ensure that rural health care providers receive telecommunications services at rates comparable to the rates charged for the same services in urban areas. Furthermore, the performance measures that FCC adopted for the primary Rural Health Care Program and the pilot program in 2007 fall short when compared with the key characteristics of successful performance measures that we have identified in our past work. Following is a discussion of these characteristics and the extent to which FCC has fulfilled them in developing performance measures: Measures should be tied to goals and demonstrate the degree to which the desired results are achieved. These program goals should, in turn, be linked to overall agency goals. However, as we have previously discussed, the measures that FCC has adopted are not based on such linkage because no specific performance goals have been established. By establishing performance measures before establishing the specific performance goals that it seeks to achieve through the Rural Health Care Program, FCC may waste valuable time and resources collecting the wrong data. FCC receives the data for these performance measures on a quarterly basis from USAC, but without effective performance goals to guide its data collection, it cannot ensure that the data gained from these performance measures are an effective use of resources. Measures should address important aspects of program performance. For each program goal, a few performance measures should be selected that cover key performance dimensions and take different priorities into account. For example, measures should be limited to core program activities because an excess of data could obscure rather than clarify performance issues. Performance measures should also cover key governmentwide priorities, such as timeliness and customer satisfaction. FCC's performance measures appear to address certain key performance dimensions. Because FCC selected just three types of measures—related to USAC's (1) processing of applications, (2) paying invoices, and (3) determining appeals—there are fewer chances of obscuring the most important performance issues. The measures also appear to take into account such priorities as timeliness and customer satisfaction. For example, the 2007 performance measures include requirements to measure the number of current and pending appeals, and the time that it takes to resolve those appeals. However, again, without first setting specific performance goals defining what the programs are specifically intended to accomplish, FCC cannot be sure that it has adopted the most appropriate performance measures. Measures should provide useful information for decision making. Performance measures should provide managers with timely, action-oriented information in a format that helps them to make decisions that improve program performance. However, the data collected by these performance measures—such as the number of applications submitted, rejected, and granted—are output oriented, not outcome oriented. The FCC task force that developed the National Broadband Plan also reported that the performance measures developed for the Rural Health Care Program need to be improved to assess desired program outcomes, such as the impact of the program on patient care.
The limited nature of the data obtained by current performance measures, combined with the absence of specific performance goals, raises concerns about the effectiveness of these performance measures for programmatic decision making. FCC is attempting to improve its performance management by seeking public comment on performance goals and measures for the Rural Health Care Program in its July 2010 NPRM. For example, FCC proposed a specific measure of how program support is being used: that is, requiring beneficiaries to annually identify the speed of the connections supported by the program and the type and frequency of the use of telemedicine applications as a result of broadband access. Although this is a positive step, the NPRM does not specify whether this data collection would be linked to specific connection speed goals that participants should obtain with program funds, and it does not propose what the goal should be for the type and frequency of the use of telemedicine applications. While this NPRM could lead to better goals and measures for the Rural Health Care Program, FCC has exhibited a pattern of repeatedly seeking comment on goals and measures for the Rural Health Care Program, which indicates that it does not have a clear vision of what it intends the program to accomplish within the broad statutory framework provided by Congress. FCC has sought public comment on performance goals and measures for the program on two previous occasions that did not result in effective performance goals and measures for the program: In June 2005, FCC issued an NPRM seeking comment on whether specific performance goals were needed and on ways to establish useful outcome, output, and efficiency measures for each of the universal service programs, including the Rural Health Care Program. FCC officials stated that this NPRM led to the 2007 performance measures that we have previously described. In September 2008, FCC issued an NOI seeking comment on how to more clearly define the goals of the Universal Service Fund programs, including the Rural Health Care Program, and to identify any additional quantifiable performance measures that may be necessary or desirable. FCC officials stated that this NOI led to the July 2010 NPRM, which, again, requests comment on performance goals and measures. Performance goals and measures are particularly important for the Rural Health Care Program because they could help FCC to make well-informed decisions about how to address the trends that we have previously described. If FCC does use information from the latest NPRM to develop specific performance goals and measures, it should focus on the results that it expects its programs to achieve. We have identified the following practices for developing successful performance goals and measures: create a set of performance goals and measures that addresses important dimensions of a program's performance and balances competing priorities; use intermediate goals and measures to show progress or contribution to intended results; include explanatory information on the goals and measures; develop performance goals to address mission-critical management problems; show baseline and trend data for past performance; identify projected target levels of performance for multiyear goals; and link the goals of component organizations to departmental strategic goals. Clearly articulated, outcome-based performance goals and measures are important to help ensure that the Rural Health Care Program meets the guiding principles that Congress has set forth.
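To show how several of these practices fit together, the sketch below models, in purely hypothetical form, an outcome-oriented performance goal with linked measures that carry baseline data and projected targets, so that the gap between actual and targeted performance can be computed. The goal statement, measure names, and all figures are invented placeholders for illustration; they are not FCC data or an FCC framework.

```python
# Illustrative sketch (not FCC's framework): a performance goal with
# linked measures, baseline data, and projected targets, mirroring the
# practices listed above. All names and figures are placeholders.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Measure:
    name: str
    baseline: float            # past performance, for trend comparison
    target: float              # projected target level for the goal year
    actual: Optional[float] = None

    def gap(self) -> Optional[float]:
        """Gap between the target and actual performance, if reported."""
        return None if self.actual is None else self.target - self.actual

@dataclass
class PerformanceGoal:
    statement: str             # a desired outcome, not an output count
    measures: list = field(default_factory=list)

# Hypothetical outcome-oriented goal with two linked measures.
goal = PerformanceGoal(
    statement="Increase rural providers' use of telemedicine-capable links",
    measures=[
        Measure("Share of eligible providers participating (%)", 10.0, 25.0, 14.0),
        Measure("Median supported connection speed (Mbps)", 1.5, 10.0, 4.0),
    ],
)
for m in goal.measures:
    print(f"{m.name}: target {m.target}, actual {m.actual}, gap {m.gap()}")
```

The design point is the linkage itself: each measure exists only as a child of a stated goal, so data collection that does not inform some goal's gap has no place in the structure.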
After implementing the key performance management practices that we have previously discussed—establishing effective performance goals and measures—results-oriented organizations implement a third key practice: evaluating the performance of their programs. Measuring performance allows these organizations to track progress toward goals and provides managers with the crucial performance data needed to make management decisions. We have previously reported that performance data can have real value only when used to identify the gap between a program's actual performance level and the performance level identified as its goal. Again, without specific performance goals and effective performance measures, FCC cannot identify program performance gaps and is unlikely to conduct evaluations that are useful for formulating policy decisions. FCC has not formally evaluated the performance of the primary Rural Health Care Program to determine whether it is meeting the needs of rural health care providers, and it may lack the tools to evaluate the pilot program—such as an effective progress reporting mechanism and an evaluation plan. To its credit, FCC has stated that it intends to evaluate the pilot program after its completion. However, it is unclear whether FCC has effective evaluation tools for conducting a pilot program evaluation that will be useful for making policy decisions about the future of the Rural Health Care Program. To track the progress of pilot projects, FCC requires pilot program participants to complete quarterly reports that are filed with FCC and USAC, but it is unclear whether these reports are effective tools for evaluating pilot program performance for the following reasons: Quarterly report data are not quantitative. Quarterly reports collect data that are mostly qualitative (e.g., a narrative description of a project's network and how the network will be sustained) instead of quantitative. While qualitative data can help officials understand project progress on an individual basis, the information is not objective or easily measured. FCC has not involved key stakeholders. We have previously reported that stakeholder and customer involvement helps agencies to ensure that efforts and resources are targeted at the highest priorities. However, key stakeholders and pilot participants are not involved in ensuring that quarterly reports are providing the most useful information possible. Pilot program coaches, who guide pilot participants through the program's administrative processes, and USAC officials said that FCC has not told them how the reports will be used to evaluate pilot program progress. USAC and the pilot program coaches work directly with participants, and without understanding how these reports will be used, they are unable to effectively guide participants toward providing the most useful evaluation information possible. Additionally, of the 45 pilot program survey respondents that provided an opinion, 26 said that they receive too little feedback on their quarterly reports. Quarterly reports may require too much information. Of the 58 pilot program survey respondents that provided an opinion, 28 said that too much information is required in quarterly reports. As we have previously discussed, an excess of data can obscure rather than clarify performance.
FCC officials told us that they have learned lessons from using these quarterly reports, and that, as part of the 2010 NPRM, FCC requested public comment on a similar reporting requirement for the proposed Health Infrastructure Program. Furthermore, despite FCC's intentions to evaluate the pilot program, officials have not yet developed an evaluation plan for the pilot program. FCC officials told us that this is because the pilot program is still under way, and that FCC will plan the evaluation when the pilot program is closer to completion (as we previously stated, the deadline for participants in the pilot program to select a vendor and request a funding commitment from USAC is June 30, 2011). However, we have previously reported that when conducting pilot programs, agencies should develop sound evaluation plans before program implementation—as part of the design of the pilot program itself—to increase confidence in results and facilitate decision making about broader application of the pilot program. We have previously identified the following key features of sound evaluation plans: well-defined, clear, and measurable objectives; measures that are directly linked to specific program objectives; criteria or standards for determining program performance; a clearly articulated methodology, including a strategy for comparing results with other efforts; a clear plan that details the type and source of data necessary to evaluate the program, methods for data collection, and the timing and frequency of data collection; a detailed data-analysis plan to track the program's performance and evaluate its final results; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. The lack of a documented evaluation plan for the pilot program increases the likelihood that FCC will not collect appropriate or sufficient data, which limits understanding of pilot program results. Without this understanding, FCC will be limited in its decision making about the pilot program's potential broader application to FCC's proposed future programs. The National Broadband Plan states that for all four Universal Service Fund programs, including the Rural Health Care Program, "there is a lack of adequate data to make critical policy decisions regarding how to better utilize funding to promote universal service objectives." FCC has not effectively followed the three key performance management practices discussed in this report and has not obtained the data that it needs to make critical policy decisions and successfully manage the program. Furthermore, FCC has proposed two new programs under the Rural Health Care Program in its 2010 NPRM (the Health Broadband Services Program and the Health Infrastructure Program), even though the National Broadband Plan states that FCC does not have the data to make critical policy decisions on how to better use its funds. In our previous work, we have reported that results-oriented organizations recognize that improvement goals should flow from a fact-based performance analysis. However, the proposed improvements to the Rural Health Care Program are not based on such an analysis because the performance of the primary Rural Health Care Program and the pilot program has not been evaluated. FCC officials told us that they believe the proposals set forth in the July 2010 NPRM are "positive first steps" toward creating improvements to performance analysis.
Because FCC has not determined what the primary Rural Health Care Program and the pilot program are specifically intended to accomplish and how well the programs are performing, it remains unclear how FCC will make informed decisions about the new programs described in the July 2010 NPRM. Moreover, as new technologies are developed, measuring the performance and effectiveness of existing programs is important so that decision makers can design future programs to effectively incorporate new technologies, if appropriate. If FCC does not institute better performance management tools—by establishing effective performance goals and measures, and planning and conducting effective program evaluations—FCC’s management weaknesses will likely continue to affect the current Rural Health Care Program, and will likely carry forward into the design and operation of proposed Rural Health Care programs. Over the first 12 years of its Rural Health Care Program, FCC has distributed more than $327 million to rural health care providers to assist them in purchasing telecommunications and information services. FCC and USAC have been particularly successful in disbursing committed funds in the primary Rural Health Care Program, and FCC has generally seen slow but steady growth in both the amounts of annual disbursements and the number of annual applicants to the primary program. However, since the Rural Health Care Program’s inception, FCC has not provided the program with a solid performance management foundation. FCC could better inform its decision making and improve its stewardship of the Rural Health Care Program by incorporating effective performance management practices into its regulatory processes. FCC has not conducted a comprehensive needs assessment to determine the needs of rural health care providers, has no specific goals and measures for the program to guide its management decisions, and has not evaluated how well the program is performing. FCC’s attempts to improve the program over time, including the 2006 pilot program, have not been informed by a documented, fact-based needs assessment; consultations with knowledgeable stakeholders, including other government agencies; and performance evaluations. Despite FCC’s efforts to improve the program, a significant number of eligible rural health care providers currently do not use the primary Rural Health Care Program, and FCC’s management of the pilot program has often led to the delays and difficulties reported by pilot participants. We found that a number of rural health care providers depend on the support they receive from the primary Rural Health Care Program, and that most pilot program participants are seeking services that they believe would have been otherwise unaffordable. It is possible that FCC’s proposed changes to the Rural Health Care Program will increase participation by rural health care providers, thus increasing the amount of funding committed by the Rural Health Care Program and, ultimately, increasing the universal service fees paid by consumers on their telephone bills. Changes in FCC’s approach to performance management could help ensure that higher telephone bills are justified; that program resources are targeting the needs of rural health care providers; and that the program, in fact, is helping our nation to realize more widespread use of telemedicine technologies. 
To improve its performance management of the Rural Health Care Program, we recommend that the Chairman of the Federal Communications Commission take the following five actions. If FCC does develop any new rural health care programs under the Universal Service Fund—such as the proposed Health Care Broadband Access Fund and the Health Care Broadband Infrastructure Fund—these steps should be taken before implementing any new programs or starting any new data collection efforts: Conduct an assessment of the current telecommunications needs of rural health care providers. Consult with USAC, other federal agencies that serve rural health care providers (or with expertise related to telemedicine), and associations representing rural health care providers to incorporate their knowledge and experience into improving current and future programs. Develop effective goals, and performance measures linked to those goals, for all current and future programs. Develop and execute a sound performance evaluation plan for the current programs, and develop sound evaluation plans as part of the design of any new programs before implementation begins. For any new program, ensure that FCC's request for applications to the program clearly (1) articulates all criteria for participating in the program and any weighting of those criteria, (2) details the program's rules and procedures, (3) outlines the program's performance goals and measures, and (4) explains how participants' progress will be evaluated. We provided a draft of this report to the Federal Communications Commission and the Universal Service Administrative Company for their review and comment. In its written comments, FCC did not specifically agree or disagree with our recommendations but discussed planned and ongoing actions to address them. FCC agreed that it should continue to examine and work to improve the Rural Health Care Program to ensure that the program is effectively and efficiently achieving its statutory goals. In response to our first recommendation that FCC conduct an assessment of the current needs of rural health care providers, FCC stated that it is gathering information about health care needs, including needs assessments performed by other governmental agencies. FCC also stated that going forward, it is committed to developing benchmarks to define when needs have increased or decreased, applying needs assessment results to resource allocation decisions, and integrating information from other resources available to help address the need. FCC's efforts to obtain information and assessments from other agencies and stakeholders are encouraging. We continue to believe, however, that FCC would benefit from conducting its own assessment of the telecommunications needs of the rural health care providers eligible under its Rural Health Care Program. In response to our second recommendation that FCC consult with stakeholders and incorporate their knowledge into improving current and future programs, FCC stated that it is committed to maximizing collaboration efforts with federal and other knowledgeable stakeholders and that it will work closely with USAC to prepare for the new program. FCC included in its comments an October 2010 statement from the Office of the National Coordinator for Health IT, Department of Health and Human Services, about collaborative efforts with FCC and other federal agencies.
In response to our third recommendation that FCC develop effective performance goals and measures, FCC concurred with the need to develop quantifiable performance measures. However, FCC did not specifically state whether it concurred with our recommendation to develop effective goals and to link performance measures to those goals. We continue to believe that FCC should develop program performance goals first, and then develop performance measures and link them to those goals. In response to our fourth recommendation that FCC develop and execute effective performance evaluation plans for the current and future programs, FCC stated that it intends to conduct an evaluation of the pilot program after it is concluded. While FCC did not address evaluation of the current primary program, it stated that for any future enhancements to the program, it is committed to developing and executing sound performance evaluation plans, including key features that we identified in our report. In response to our fifth recommendation that FCC identify critical program information, such as criteria for funding, and prioritization rules in its call for applications for any new programs, FCC stated that the July 2010 NPRM discusses these elements in detail. While we appreciate FCC’s efforts to better detail proposed programs in its NPRM, we continue to believe that FCC should detail the requirements for participation in the call for applications to any future programs. FCC’s full comments are reprinted in appendix III. In its written comments, USAC stated that it will work with FCC to implement any orders or directives that FCC issues in response to our recommendations. USAC’s full comments are reprinted in appendix IV. USAC also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of the Federal Communications Commission, the Acting Chief Executive Officer of the Universal Service Administrative Company, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Our objectives were to address the following questions: (1) How has the Federal Communications Commission (FCC) managed the primary Rural Health Care Program to meet the needs of rural health care providers, and how well has the program addressed those needs? (2) How have FCC’s design and implementation of the pilot program affected participants? and (3) What are FCC’s performance goals and measures for the Rural Health Care Program, and how do these goals compare with the key characteristics of successful performance goals and measures? We conducted the following background research that helped inform all of our reporting objectives. 
Specifically, we reviewed: prior GAO reports on other Universal Service Fund programs; FCC's Universal Service Monitoring Reports on the Rural Health Care Program; documentation from FCC and the Universal Service Administrative Company (USAC) on the structure and operation of the Rural Health Care Program and pilot program; and FCC documents, including FCC orders and requests for comment on the Universal Service Fund programs, as well as written comments submitted in response to these requests. In addition, we interviewed: officials from FCC's Office of Managing Director and Wireline Competition Bureau to identify actions undertaken to address previously identified problems and plans to address issues of concern in the programs, and officials from USAC's Rural Health Care Division and Solix, Inc., to collect information on program operations and USAC's actions to implement prior FCC orders on the primary Rural Health Care Program and pilot program. To evaluate how the primary Rural Health Care Program was managed to meet the needs of rural health care providers, we examined trends in the demand for and use of primary Rural Health Care Program funding from data we obtained from USAC's Packet Tracking System (PATS), which is used to keep track of primary Rural Health Care Program applications, and the Simplified Invoice Database System (SIDS), which is used to keep track of program disbursements. When analyzing and reporting on the data, we considered the limitations on how data can be manipulated and retrieved from both the PATS and SIDS databases, since these systems were designed to keep track of applications and finances and not to be data retrieval systems. We assessed the reliability of the data by questioning officials about controls on access to the system and data backup procedures. Additionally, we reviewed the data sets provided to us for obvious errors and inconsistencies. On the basis of this assessment, we determined that the data were sufficiently reliable to describe broad trends in the demand for and use of Rural Health Care Program funding. We obtained the following data—including annual and cumulative figures—for funding years 1998 through 2009: the number and characteristics of applicants, including their entity type, the type of service requested, and location; the dollar amount of funding commitments and disbursements by entity type, type of service requested, and state; the number of commitments and disbursements by state; and the amount of money committed but not disbursed by entity type and type of service requested. To provide these data, USAC performed queries on the PATS and SIDS systems and provided the resulting reports to us in July 2010. Data from both systems can change on a daily basis as USAC processes applications for funding and reimbursement, applicants request adjustments to requested or committed amounts, and other actions are taken. As a result, the data we obtained and reported on reflect the program status at the time that USAC produced the data, and thus may be somewhat different if we were to perform the same analyses with data produced at a later date. To assess how well the primary Rural Health Care Program addressed the needs of rural health care providers, we interviewed FCC and USAC officials to determine how the program was designed to address rural health care provider needs. We reviewed relevant documentation, including FCC orders, notices of proposed rulemaking, and FCC's National Broadband Plan.
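As a minimal illustration of the error-and-inconsistency screening described above for the PATS and SIDS extracts, the sketch below flags records with missing fields, negative amounts, or disbursements exceeding commitments. The record layout and field names are hypothetical; the actual export formats of those systems are not described in this report.

```python
# Minimal sketch of basic consistency screening for funding-data extracts.
# The field names are hypothetical assumptions, not the PATS/SIDS schema.

def screen_records(records):
    """Flag obvious errors: missing fields, negative commitments, and
    disbursements that exceed the committed amount."""
    problems = []
    for i, rec in enumerate(records):
        for key in ("applicant_id", "funding_year", "committed", "disbursed"):
            if rec.get(key) is None:
                problems.append((i, f"missing {key}"))
        committed, disbursed = rec.get("committed"), rec.get("disbursed")
        if committed is not None and committed < 0:
            problems.append((i, "negative commitment"))
        if None not in (committed, disbursed) and disbursed > committed:
            problems.append((i, "disbursed exceeds commitment"))
    return problems

sample = [
    {"applicant_id": "HCP001", "funding_year": 2008, "committed": 12000.0, "disbursed": 11500.0},
    {"applicant_id": "HCP002", "funding_year": 2008, "committed": 8000.0, "disbursed": 9100.0},
]
print(screen_records(sample))  # -> [(1, 'disbursed exceeds commitment')]
```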
We also reviewed comments and reply comments to the record to gain insight into the public perception of how the program was addressing needs. Furthermore, we interviewed representatives from stakeholder groups, including the American Telemedicine Association, the National Organization of State Offices of Rural Health, the National Rural Health Association, the Center for Telehealth and E-Health Law, and the National Telecommunications Cooperative Association, to gain their perspective on the primary Rural Health Care Program. To obtain information on how FCC’s design and implementation of the pilot program affected participants, we conducted a Web-based survey of pilot projects. For a more complete tabulation of the survey results, see the e-supplement to this report. To develop the survey questionnaire, we reviewed comments submitted to FCC by representatives from the pilot projects and other stakeholders in response to FCC requests for feedback on the pilot program. We also interviewed pilot project representatives who were in various stages of the pilot program processes as well as FCC, USAC, Solix, and stakeholder groups knowledgeable about the program and issues of concern to participants. We designed draft questionnaires in close collaboration with GAO survey specialists. We conducted pretests with four pilot projects that were in various stages of the pilot program processes to help further refine our questions, develop new questions, and clarify any ambiguous portions of the survey. We conducted these pretests in person and by telephone. In addition, we had FCC and USAC review the survey prior to it being sent to the pilot participants. We sent our survey to all 61 of the pilot projects that had recent contact information on file with USAC, as of June 2, 2010. We excluded the Puerto Rico project because at the time of our survey, it was the only project that had been withdrawn from the program for an extended period of time; thus, although we tried, locating a contact with knowledge of the program was not possible. Our goal was to obtain responses from individuals with knowledge of and experience with the tasks related to the pilot program—such as preparing forms and responding to information requests—for each sampled entity. Our data set included the name and contact information for each project’s project coordinator and associate project coordinator. We asked USAC coaches to identify who they interacted with the most on each project (project coordinator, associate project coordinator, or someone else), and we sent the survey to that individual. If that individual was unable to complete the survey, we asked the other contact (project coordinator, associate project coordinator, or someone else) to complete the survey. One respondent was the primary point of contact for two projects, but a separate survey was completed for each project. We notified the 61 preidentified contacts on June 2, 2010, by e-mail that the survey was about to begin and updated contact information as needed. We launched our Web-based survey on June 8, 2010, and asked for responses to be submitted by June 18. Log-in information was e-mailed to all participants. We contacted by telephone and e-mailed those who had not completed the questionnaire at multiple points during the data collection period, and we closed the survey on July 2, 2010. All 61 projects submitted a completed questionnaire with usable responses for an overall response rate of 100 percent. 
We also followed up with certain projects on the basis of survey responses to gain additional information about plans for using excess capacity, as well as the extent to which the project was affected by federal coordination with the pilot program. Because we surveyed all pilot projects, our data are not subject to sampling errors; however, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. As we previously indicated, we collaborated with GAO survey specialists to design draft questionnaires, and versions of the questionnaire were pretested with four members of the surveyed population. In addition, we provided a draft of the questionnaire to FCC and USAC for their review and comment. From these pretests and reviews, we made revisions as necessary to reduce the likelihood of nonresponse and reporting errors on our questions. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and addressed such issues where possible. A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. In addition, GAO analysts answered respondent questions and resolved difficulties that respondents had in answering our questions. For certain questions that asked respondents to provide a narrative answer, we created content categories that covered more than 90 percent of the narrative responses provided, and asked two analysts to independently code each response into one of the categories. The two analysts discussed and resolved any discrepancies in their coding (a simple illustration of this agreement check appears at the end of this appendix).

To determine the extent to which FCC coordinated with other federal agencies when designing and implementing the pilot program, we interviewed FCC officials regarding the nature of their coordination with other agencies, and followed up with representatives from other federal agencies, including the Department of Health and Human Services (Agency for Healthcare Research and Quality, Centers for Disease Control and Prevention, Centers for Medicare and Medicaid Services, Health Resources and Services Administration, Indian Health Service, National Library of Medicine, Office of the Assistant Secretary for Preparedness and Response, and Office of the National Coordinator for Health Information Technology); the U.S. Department of Agriculture's Rural Utilities Service; and the Department of Commerce's National Telecommunications and Information Administration. We reviewed relevant documentation and assessed the extent to which FCC coordinated with other agencies against criteria for coordination established in prior GAO reports.

To determine the performance goals and measures of the Rural Health Care Program and how these measures compare with the key characteristics of successful performance measures, we reviewed the Telecommunications Act of 1996. We then reviewed our past products and the science and evaluation literature to identify effective practices for setting performance goals and measures.
We compared this information with the program goals and measures that FCC set forth in agency documentation—including FCC orders, notices of proposed rulemaking, strategic plans, and performance and accountability reports. We also reviewed the Office of Management and Budget's Program Assessment Rating Tool 2006 report on the Rural Health Care Program's effectiveness. In addition, we interviewed officials from FCC's Wireline Competition Bureau and Office of Managing Director, and officials from USAC's Rural Health Care Division, to obtain their views on plans to implement Rural Health Care Program performance goals and measures.

Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov.

In addition to the contact named above, Faye Morrison (Assistant Director), Elizabeth Curda, Lorraine Ettaro, Cheron Green, Amy Higgins, Crystal Huggins, Catherine Hurley, Linda Kohn, Armetha Liles, Elizabeth Marchak, Valerie Melvin, John Mingus Jr., Sara Ann Moessbauer, Charlotte Moore, Joshua Ormond, Madhav Panwar, Carl Ramirez, Amy Rosewarne, Cynthia Saunders, Michael Silver, Teresa Tucker, and Mindi Weisenbloom made key contributions to this report.
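As referenced in the discussion of our content analysis above, two analysts independently coded each narrative survey response and reconciled discrepancies. We did not report a quantitative agreement statistic; the sketch below, with hypothetical response identifiers and category labels, simply illustrates how such double-coding can be checked with a percent-agreement computation before reconciliation.

# Illustrative only: hypothetical response IDs and content categories;
# GAO's actual categories and reconciliation records are not reproduced here.
coder_a = {"resp01": "funding delay", "resp02": "guidance unclear",
           "resp03": "coach helpful", "resp04": "funding delay"}
coder_b = {"resp01": "funding delay", "resp02": "requirements changed",
           "resp03": "coach helpful", "resp04": "funding delay"}

shared = sorted(coder_a.keys() & coder_b.keys())
matches = [r for r in shared if coder_a[r] == coder_b[r]]
discrepancies = [r for r in shared if coder_a[r] != coder_b[r]]

# Simple percent agreement; discrepant responses go back to the two
# analysts for discussion and resolution, as described in the appendix.
print(f"Percent agreement: {100 * len(matches) / len(shared):.0f}%")
for r in discrepancies:
    print(f"Reconcile {r}: coder A = {coder_a[r]!r}, coder B = {coder_b[r]!r}")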
Telemedicine offers a way to improve health care access for patients in rural areas. The Federal Communications Commission's (FCC) Rural Health Care Program, established in 1997, provides discounts on rural health care providers' telecommunications and information services (the primary program) and funds broadband infrastructure and services (the pilot program). GAO was asked to review (1) how FCC has managed the primary program to meet the needs of rural health care providers, and how well the program has addressed those needs; (2) how FCC's design and implementation of the pilot program affected participants; and (3) FCC's performance goals and measures for both the primary program and the pilot program, and how these goals compare with the key characteristics of successful performance goals and measures. GAO reviewed program documents and data, interviewed program staff and relevant stakeholders, and surveyed all 61 pilot program participants with recent participation in the program.

FCC has not conducted an assessment of the telecommunications needs of rural health care providers as it has managed the primary Rural Health Care Program, which limits FCC's ability to determine how well the program has addressed those needs. Participation in the primary program has increased, and some rural health care providers report that they are dependent on the support received from the program. For example, a provider in Alaska has used program funds to increase the use of telemedicine, which has reduced patient wait times and travel costs. FCC has been successful in disbursing over 86 percent of all committed funds. However, FCC has disbursed only $327 million in total over the 12 years of the primary program's operation, less than any single year's $400 million funding cap. FCC has frequently stated that the primary program is underutilized and has made a number of changes to the program, including the creation of the pilot program. Currently, FCC is proposing to replace portions of the primary program with a new broadband services program. However, without a needs assessment, FCC cannot determine how well the current program is targeting those needs (and whether the program is, in fact, underutilized) or ensure that a new program will target needs any better.

FCC's poor planning and communication during the design and implementation of the pilot program caused delays and difficulties for pilot program participants. FCC did not consult with the program's administrator, other federal agencies, or relevant stakeholders prior to announcing the program, nor did it request public comment on its design. In addition, FCC called for applications to participate in the pilot program before it fully established pilot program requirements. FCC added program requirements after the pilot program began, and survey respondents indicated that program guidance was not provided in an effective manner. Despite these difficulties, most participants were positive about the assistance provided by program officials and reported that the benefits they anticipate receiving from the pilot program outweigh the costs of participating. However, the entire program has been delayed, and projects have struggled to meet requirements that were not clearly defined at the beginning of the program.

FCC has not developed specific performance goals for the Rural Health Care Program and has developed ineffective performance measures.
The performance measures are limited for a number of reasons, the most important of which is that FCC has set no specific performance goals to which to link them. In addition, FCC has not evaluated the performance of the primary Rural Health Care Program and has no evaluation plan for the pilot program. Without reliable performance information, FCC does not have the data it needs to make critical policy decisions about the overall Rural Health Care Program. If FCC does not correct these performance management deficits, it may carry the same weaknesses into its stewardship of the new rural health care programs it has proposed.

GAO recommends that the FCC Chairman assess rural health care providers' needs, consult with knowledgeable stakeholders, develop performance goals and measures, and develop and execute sound performance evaluation plans. In its comments, FCC did not agree or disagree with the recommendations, but discussed planned and ongoing actions to address them.
The Army’s conversion to a modular force encompasses the Army’s total force—active Army, Army National Guard, and Army Reserve—and directly affects not only the Army’s combat units but also related command and support organizations. A key to the Army’s new modular force design is embedding within combat brigades battalion-sized reconnaissance, logistics, and other support units that previously made up parts of division-level and higher-level command and support organizations, allowing the brigades to operate independently. Restructuring these units is a major undertaking because it requires more than just the movement of personnel or equipment from one unit to another. The Army’s new modular units are designed, equipped, and staffed differently from the units they replace; therefore, successful implementation of this initiative will require changes such as new equipment and a different mix of skills and occupational specialties among Army personnel.

By 2011, the Army plans to have reconfigured its total force—to include active and reserve components and headquarters, combat, and support units—into the modular design. The foundation of the modular force is the creation of modular brigade combat teams—combat maneuver brigades that will have a common organizational design and will increase the rotational pool of ready units. Modular combat brigades will have one of three standard designs—heavy brigade combat team, infantry brigade combat team, and Stryker brigade combat team.

Until it revised its plans in March 2006, the Army had planned to have a total of 77 active component and National Guard modular combat brigades by expanding the existing 33 combat brigades in the active component into 43 modular combat brigades by 2007, and by creating 34 modular combat brigades in the National Guard by 2010 from existing brigades and divisions that have historically been equipped well below requirements. To rebalance joint ground force capabilities, the 2006 QDR determined the Army should have a total of 70 modular combat brigades—42 active brigades and 28 National Guard brigades. Also in March 2006, the Army was in the process of revising its modular combat brigade conversion schedule; it now plans to convert its active component brigades by fiscal year 2010 instead of 2007 as previously planned, and to convert National Guard brigades by fiscal year 2008 instead of 2010. As of March 2006, the Army had completed the conversion of 19 active component brigades to the modular design and was in the process of converting 2 active and 7 National Guard brigades. Table 1 shows the Army’s schedule as of March 2006 for creating active component and National Guard modular combat brigades.

According to the Army, this larger pool of available combat units will enable it to generate both active and reserve component forces in a rotational manner that will support 2 years at home following each deployed year for active forces. To do this, the Army has created a rotational force generation model in which units rotate through a structured progression of increased unit readiness over time. Units will progress through three phases of operational readiness cycles, culminating in full mission readiness and availability to deploy. The Army’s objective is for the new modular combat brigades, which will include about 3,000 to 4,000 personnel, to have at least the same combat capability as a brigade under the current division-based force, which ranges from 3,000 to 5,000 personnel.
Since there will be more combat brigades in the force, the Army believes its overall combat capability will be increased as a result of the restructuring, providing added value to combatant commanders. Although somewhat smaller in size, the new modular combat brigades are expected to be as capable as the Army’s existing brigades because they will have different equipment, such as advanced communications and surveillance equipment, and a different mix of personnel and support assets. The Army’s organizational designs for the modular brigades have been tested by its Training and Doctrine Command’s (TRADOC) Analysis Center against a variety of scenarios, and the Army has found the new designs to be as capable as the existing division-based brigades in modeling and simulations.

The Army’s cost estimate for modularity has continued to evolve since our September 2005 report. As we reported, the Army’s cost estimate for transforming its force through fiscal year 2011 increased from $28 billion in the summer of 2004 to $48 billion in the spring of 2005. The latter estimate addressed some of the shortcomings of the initial rough order of magnitude estimate and included lessons learned from operations in Iraq. For example, it included costs of restructuring the entire force, to include 77 brigade combat teams, as well as the creation of support and command units. However, it excluded some known costs. For example, the $48 billion estimate did not include $4.5 billion in construction costs the Army plans to fund through business process engineering efficiencies, which historically have been difficult to achieve. The Army added these costs when it revised its cost estimate in March 2006, bringing the most recent total to $52.5 billion. As shown in table 2, most of the planned funding for modularity—$41 billion, or about 78 percent—will be used to procure equipment, with the remaining funds divided between military construction and facilities and sustainment and training. In addition, Army leaders have recently stated they may seek additional funds after 2011 to procure additional equipment for modular restructuring.

In our September 2005 report, we highlighted uncertainties related to force design, equipment, facilities, and personnel that could drive costs higher. Some of these uncertainties have been clarified. For example, we noted that costs in equipment and facilities would increase significantly if the Secretary of Defense decided to add 5 brigades to the Army’s active component to create a total of 48 brigade combat teams—a decision that was scheduled to be made in fiscal year 2006. That decision was made as part of the QDR. Instead of an increase of 5 brigade combat teams, the QDR report stated that the Army would create a total of 42 such brigades in the active component, a reduction of 1 brigade combat team from the Army’s plan. In addition, the number of National Guard brigade combat teams was reduced from 34 to 28. In sum, the QDR decisions reduced the number of planned brigade combat teams from 77 to 70. However, Army officials stated that the Army plans to fully staff and equip these units. Moreover, Army officials told us that the Army plans to use resources freed up by this decision to increase support units in the reserve component and to fund additional special operations capability in the active component. We also noted in our September 2005 report that the Army had not completed designs for all the support units at the time the estimate was set.
According to Army officials, these designs have been finalized. Despite these refinements to the design and changes to the planned number of combat and support brigades, the Army has not revised its $52.5 billion cost estimate or funding plan to reflect these changes. Moreover, as I will discuss shortly, uncertainty remains in the Army’s evolving strategy for equipping its modular combat brigades. As a result, based on discussions with Army officials, it remains unclear to what extent the $41 billion will enable the Army to equip units to the levels in the Army’s tested design. In addition, it is not clear how the Army will distinguish among the costs of modularity, of restoring equipment used in operations, and of modernizing equipment. In estimating its equipment costs for modularity, the Army assumed that some equipment from ongoing operations would remain in operational condition for redistribution to new and restructured modular units. To the extent equipment is not returned from operations at assumed rates, it is not clear how this will affect the equipping levels of modular units or how the Army would pay for such equipment. As a result, the Secretary of Defense and Congress may not be in a sound position to weigh competing demands for funding and assess whether the Army will be able to fully achieve planned capabilities for the modular force by 2011 within the planned funding level.

The Army has made progress in creating active component modular combat brigades, but it is not meeting its equipping goals for these brigades and is still developing its overall equipping strategy, which raises concerns about the extent to which brigades will be equipped in the near and longer term. While active brigades are receiving significant amounts of new equipment, Army officials indicated that they may seek additional funding for equipment beyond 2011. Moreover, brigades will initially lack key equipment, including items that provide enhanced intelligence, situational awareness, and network capabilities needed to help the Army achieve its planned capabilities of creating a more mobile, rapidly deployable, joint, expeditionary force. In addition, because of existing equipment shortages, the Army National Guard will likely face even greater challenges providing the same types of equipment for its 28 planned modular combat brigades. To mitigate equipment shortages, the Army plans to give priority for equipment to deploying active component and National Guard units but allocate lower levels of remaining equipment to other, nondeploying units based on their movement through training and readiness cycles. However, the Army has not yet determined the levels of equipment it needs to support this strategy, assessed the operational risk of not fully equipping all units, or provided Congress detailed information about these plans so it can assess the Army’s current and long-term equipment requirements and funding plans.

The Army faces challenges meeting its equipping goals for its modular brigades both in the near and longer term. As of February 2006, the Army had converted 19 modular combat brigades in the active force. According to the Army Campaign Plan, which established time frames and goals for the modular force conversions, each of these units individually is expected to have on hand at least 90 percent of its required major equipment items within 180 days after its new equipment requirements become effective.
We reviewed data from several brigades that had reached the effective date for their new equipment requirements by February 2006, and found that all of these brigades reported significant shortages of equipment 180 days after the effective date of their new equipment requirements, falling well below the equipment goals the Army established in its Campaign Plan. Additionally, the Army is having difficulty providing equipment to units undergoing their modular conversion in time for training prior to operational deployments, and deploying units often do not receive some of their equipment until after their arrival in theater. At the time of our visits, officials from three Army divisions undergoing modular conversion expressed concern over the lack of key equipment needed for training prior to deployment.

The Army already faced equipment shortages before it began its modular force transformation and is wearing out significant quantities of equipment in Iraq, which could complicate plans for fully equipping new modular units. By creating modular combat brigades with standardized designs and equipment requirements, the Army believed that it could utilize more of its total force, thereby increasing the pool of available and ready forces to meet the demands of sustained rotations and better respond to an expected state of continuous operations. Also, by comparably equipping all of these units across the active component and National Guard, the Army further believes it will be able to discontinue its practice of allocating limited resources, including equipment, based on a system of tiered readiness, which resulted in lower-priority units in both active and reserve components having significantly lower levels of equipment and readiness than the higher-priority units. However, because of the need to establish a larger pool of available forces to meet the current high pace of operational commitments, the Army’s modular combat brigade conversion schedule is outpacing the planned acquisition or funding for some equipment requirements. The Army has acknowledged that funding does not match its modular conversion schedule and that some units will face equipment shortages in the early years of transformation. The Army says it will manage these shortfalls; however, according to Army officials, the Army may continue to seek modular force equipment funding beyond 2011 and may exceed its $52.5 billion modularity cost estimate.

Active modular combat brigades will initially lack required numbers of some of the key equipment that Army force design analyses determined essential for achieving their planned capabilities. Army force designers identified a number of key organizational, personnel, and equipment enablers they determined must be present for the modular combat brigades to be as lethal as the division-based brigades they are replacing, achieve their expected capabilities, and function as designed. Essential among these is the equipment that will enable the modular combat brigades to function as stand-alone, self-sufficient tactical forces, capable of conducting and sustaining operations on their own if required, without also deploying large numbers of support forces.
They include battle command systems to provide modular combat brigades the latest command and control technology for improved situational awareness; advanced digital communications systems to provide secure high-speed communications links; and advanced sensors, providing modular combat brigades their own intelligence-gathering, reconnaissance, and target acquisition capabilities.

We reviewed several command and control, communications, and reconnaissance systems to determine the Army’s plans and timelines for providing active modular combat brigades some of the key equipment they need to achieve their planned capabilities and function as designed. According to Army officials responsible for managing the distribution and fielding of equipment, in 2007, when 38 of 42 active component modular combat brigades are to complete their modular conversions, the Army will not have all of this equipment on hand to meet the new modular force design requirements. These shortfalls have a range of causes, but they arise primarily because the modular conversion schedule is outpacing planned acquisition and funding. For example, the Army does not expect to meet its modular combat brigade requirements for Long-Range Advanced Scout Surveillance Systems, an advanced visual sensor that provides long-range surveillance capability to detect, recognize, and identify distant targets, until at least 2012. In addition, because of an Army funding decision, the Army plans to meet only 85 percent of its requirements across the force for Single Channel Ground and Airborne Radio Systems, a command and control network radio system that provides voice and data communications capability in support of command and control operations. Finally, a recent DOD decision could set back the Army’s schedule for the acquisition of Joint Network Node, a key communications system that provides secure high-speed computer network connections for data transmission down to the battalion level, including voice, video, and e-mail. According to Army officials, DOD recently decided to require the Army to have Joint Network Node undergo developmental and operational testing prior to further acquisition, which could delay equipping active and National Guard modular combat brigades.

In addition to the challenges the Army faces in providing active component modular combat brigades the equipment necessary for meeting expected capabilities, the Army will face greater challenges meeting its equipping requirements for its 28 planned National Guard combat brigades. The Army’s modular force concept is intended to transform the National Guard from a strategic standby force into a force that is organized, staffed, and equipped comparably to active units for involvement in the full range of overseas operations. As such, Guard combat units will enter into the Army’s new force rotation model in which, according to the Army’s plans, Guard units would be available for deployment 1 year out of every 6. However, Guard units have previously been equipped at less than wartime readiness levels (often at 65 to 75 percent of requirements) under the assumption that there would be sufficient time for Guard forces to obtain additional equipment prior to deployment. Moreover, as of July 2005, the Army National Guard had transferred more than 101,000 pieces of equipment from nondeploying units to support Guard units’ deployments overseas.
As we noted in our report last year on National Guard equipment readiness, National Guard Bureau officials estimated that the Guard’s nondeployed units had only about 34 percent of their essential warfighting equipment as of July 2005 and had exhausted inventories of 220 critical items. Although the Army says it plans to invest $21 billion in equipping and modernizing the Guard through 2011, Guard units will start their modular conversions with less and much older equipment than most active units. This will add to the challenge the Army faces in achieving its plans and timelines for equipping Guard units at levels comparable to active units and fully meeting the equipping needs across both components. Moreover, the Army National Guard believes that even after the Army’s planned investment, it will have to accept risk in certain equipment, such as tactical wheeled vehicles, aircraft, and force protection equipment.

Because the Army realized that it would not have enough equipment in the near term to simultaneously equip modular combat brigades at 100 percent of their requirements, the Army is developing a new equipping strategy as part of its force rotation model; however, it has not yet determined equipping requirements for this new strategy. Under the force rotation model, the Army would provide increasing amounts of equipment to units as they move through training phases and near readiness for potential deployment, so they would be ready to respond quickly if needed with fully equipped forces. The Army believes that over time, equipping units in a rotational manner will enable it to better allocate available equipment and help manage risk associated with specific equipment shortages. Under this strategy, brigades will have three types of equipment sets—a baseline set, a training set, and a deployment set. The baseline set would vary by unit type and assigned mission, and the equipment it includes could be significantly reduced from the amount called for in the modular brigade design. Training sets would include more of the equipment units will need to be ready for deployment, but units would share the equipment, which would be located at training sites throughout the country. The deployment set would include all equipment needed for deployment, including theater-specific equipment, high-priority items provided through operational needs statements, and equipment from Army prepositioned stock. With this cyclical equipping approach, the Army believes it can have from 12 to 16 active combat brigades and from 3 to 4 Army National Guard combat brigades equipped and mission ready at any given time (a rough arithmetic check of these figures follows below).

However, the Army has not yet determined equipping requirements for units as they progress through the rotational cycles. While the Army has developed a general proposal to equip both active and Army National Guard units according to the readiness requirements of each phase of the rotational force model, it has not yet detailed the types and quantities of items required in each phase. We noted in our October 2005 report on Army National Guard equipment readiness that, at the time of the report, the Army was still developing the proposals for what would be included in the three equipment sets and planned to publish the final requirements in December 2005. However, as of March 2006, the Army had not decided on specific equipping plans for units in the various phases of its force rotation model.
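The 12-to-16 figure cited above is broadly consistent with simple rotation arithmetic. As a rough consistency check only (the Army’s force generation model layers readiness phases and equipping levels on top of this, so these figures are upper bounds rather than the Army’s own calculation):

\[
42\ \text{active brigades} \times \frac{1\ \text{deployed year}}{3\text{-year cycle}} = 14,
\qquad
28\ \text{Guard brigades} \times \frac{1\ \text{available year}}{6\text{-year cycle}} \approx 4.7 .
\]

The active figure falls within the stated range of 12 to 16; the Guard figure sits slightly above the stated 3 to 4, consistent with not every available brigade being fully equipped and mission ready at once.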
Because the Army is early in the development of its rotational equipping strategy and has not yet defined specific equipping plans for units as they progress through rotational cycles, the levels of equipment the deploying and nondeploying units would receive are currently not clear. Therefore, it is difficult to assess the risk associated with decreasing nondeploying units’ readiness to perform other missions, or the ability of units in the earlier stages of the rotational cycle to respond to an unforeseen crisis if required.

The Army has made some progress meeting modular personnel requirements in the active component by shifting positions from its noncombat force to its operational combat force, but it faces significant challenges reducing its overall end strength while increasing the size of its modular combat force. The Army plans to reduce its current end strength of 512,400, which reflects a temporary authorized increase, to 482,400 soldiers by 2011 in order to help fund the Army’s priority acquisition programs. Simultaneously, the Army plans to increase the number of soldiers in its combat force from approximately 315,000 to 355,000 in order to meet the increased personnel requirements of its new, larger modular force structure. The Army plans to pursue several initiatives to reduce and realign its force to meet these planned manpower levels. For example, the Army has had some success in converting nonoperational military positions into civilian positions, thereby freeing up soldiers to fill modular combat brigades’ requirements. During fiscal year 2005, the Army converted approximately 8,000 military positions to civilian-staffed positions within the Army’s institutional force. However, officials believe additional conversions will be more challenging to achieve. In addition to its success with the military-to-civilian conversions, the Army has been given statutory authority to reduce active personnel support to the National Guard and Reserves by 1,500. However, the Army must still eliminate additional positions, through these and other initiatives, so it can reduce its overall end strength while filling requirements for modular units.

While the Army is attempting to reduce end strength and realign positions to the combat force through several initiatives, it may have difficulty meeting its expectations for some of them. For example, the Army expected that the Base Realignment and Closure (BRAC) decisions of 2005 could free up approximately 2,000 to 3,000 positions in the institutional Army, but it is revisiting this assumption based on updated manpower levels at the commands and installations approved for closure and consolidation. Army officials believe they will be able to realign some positions from BRAC, but it is not clear whether the reductions will free up 2,000 to 3,000 military personnel. Similarly, Army officials expected reductions of several hundred base support staff to result from restationing forces currently overseas back to garrisons within the United States. However, Army officials are still attempting to determine whether the actual savings will meet the original assumptions. In addition, the Army’s new modular force structure increases requirements for military intelligence specialists, but according to Army officials the Army will not be able to fully meet these requirements.
The modular force requires the Army to adjust the skill mix of its operational force by adding 8,400 active component intelligence specialist positions to support its information superiority capability—considered a key enabler of modular force capabilities. However, the Army plans to fill only about 57 percent of these positions (roughly 4,800 of the 8,400) by 2013, in part because of efforts to reduce overall end strength. According to Army officials, despite these shortfalls, intelligence capability has improved over that of the previous force; however, shortfalls in filling intelligence requirements have stressed intelligence specialists with a high tempo of deployments. Because intelligence was considered a key enabler of the modular design—a component of the new design’s improved situational awareness—it is unclear how this shortage in planned intelligence capacity will affect the overall capability of modular combat brigades.

If the Army is unable to transfer enough active personnel to its combat forces while simultaneously reducing its overall end strength, it will be faced with a difficult choice. The Army could accept increased risk to its operational units or to the nonoperational units that provide critical support, such as training. Alternatively, the Army could ask DOD to seek an end strength increase and identify funds to pay for additional personnel. However, DOD is seeking to reduce end strength in all the services to limit its personnel costs and provide funds for other priorities.

The Army lacks a comprehensive and transparent approach to effectively measure its progress against stated modularity objectives, assess the need for further changes to its modular unit designs, and monitor implementation plans. GAO and DOD, among others, have identified the importance of establishing objectives that can be translated into measurable, results-oriented metrics, which in turn provide accountability for results. In a 2003 report, we found that the adoption of a results-oriented framework that clearly establishes performance goals and measures progress toward those goals was a key practice for implementing a successful transformation. DOD has also recognized the need to develop or refine metrics so it can measure efforts to implement the defense strategy and provide useful information to senior leadership.

The Army considers the Army Campaign Plan to be a key document guiding the modular restructuring. The plan provides broad guidelines for modularity and other program tasks across the entire Army. However, modularity-related metrics within the plan are limited to a schedule for creating modular units and an associated metric of achieving unit readiness goals for equipment, training, and personnel by certain dates after unit creation. Moreover, a 2005 assessment by the Office of Management and Budget identified the total number of brigades created as the only metric the Army has developed for measuring the success of its modularity initiative. Another key planning document, the 2005 Army Strategic Planning Guidance, identified several major expected advantages of modularity, including an increase in the combat power of the active component force by at least 30 percent, an increase in the rotational pool of ready units by at least 50 percent, the creation of a deployable joint-capable headquarters, a force design to which future network-centric developments can be readily applied, and reduced stress on the force through a more predictable deployment cycle.
However, these goals have not been translated into outcome-related metrics that are reported to give decision makers a clear picture of the status of the modular restructuring as a whole. Army officials stated that unit creation schedules and readiness levels are the best available metrics for assessing modularity progress because modularity is a reorganization encompassing hundreds of individual procurement programs that would be difficult to assess collectively in a modularity context. While we recognize the complexity of the modular restructuring, we also note that without clear definitions of metrics, and periodic communication of performance against these metrics, the Secretary of Defense and Congress will have difficulty assessing the impact of refinements and enhancements to the modular design, such as the changes in the number of modular combat and support brigades reported in the QDR, and any changes in resource requirements that may occur as a result.

In fiscal year 2004, TRADOC’s Analysis Center concluded, based on an integrated and iterative analysis employing computer-assisted exercises, subject matter experts, and senior observers, that the modular brigade combat team designs would be more capable than division-based units. This analysis culminated in the approval of modular brigade-based designs for the Army. The assessment employed performance metrics such as mission accomplishment, units’ organic lethality, and survivability, and compared the performance of variations on modular unit designs against the existing division-based designs. The report emphasized that the Chief of Staff of the Army had asked for “good enough” prototype designs that could be quickly implemented, and the modular organizations assessed were not the end of the development effort.

Since these initial design assessments, the Army has been assessing implementation and making further adjustments to designs and implementation plans through a number of venues, to include unit readiness reporting on personnel, equipment, and training; modular force coordination cells to assist units in the conversion process; modular force observation teams to collect lessons during training; and collection and analysis teams to assess units’ effectiveness during deployment. TRADOC has approved some design change recommendations and has not approved others. For example, TRADOC analyzed a Department of the Army proposal to reduce the number of Long-Range Advanced Scout Surveillance Systems but recommended retaining the higher number in the existing design, in part because of decreases in units’ assessed lethality and survivability with the reduced number of surveillance systems.

Army officials maintain that ongoing assessments provide sufficient validation that the modularity concept works in practice. However, these assessments do not provide a comprehensive evaluation of the modular design as a whole. Further, the Army does not plan to conduct a similar overarching analysis to assess the modular force’s capabilities to perform operations across the full spectrum of potential conflict. In November 2005, we reported that methodically testing, exercising, and evaluating new doctrines and concepts is an important and established practice throughout the military, and that particularly large and complex issues may require long-term testing and evaluation guided by study plans.
We believe the evolving nature of the design highlights the importance of planning for broad-based evaluations of the modular force to ensure the Army is achieving the capabilities it intended and to provide an opportunity to make course corrections if needed. For example, one controversial element of the design was the decision to include two maneuver battalions instead of three in the brigade combat teams. TRADOC’s 2004 analysis noted that the brigade designs with the two-maneuver-battalion organization had reduced versatility compared to the three-maneuver-battalion design, and cited this as one of the most significant areas of risk in the modular combat brigade design. Some defense experts, including a current division commander and several retired Army generals, have expressed concerns about this aspect of the modular design. In addition, some of these experts have expressed concerns about whether the current designs have been sufficiently tested and whether they provide the best mix of capabilities to conduct full-spectrum operations. In addition, the Army has recently completed designs for support units and headquarters units. Once the Army gains more operational experience with the new modular units, it may find that it needs to make further adjustments to its designs. Without another broad-based evaluation, the Secretary of Defense and congressional leadership will lack visibility into the capabilities of the brigade combat teams as they are being organized, staffed, and equipped.

The fast pace, broad scope, and cost of the Army’s restructuring to a modular force present considerable challenges for the Army, particularly as it continues to be heavily involved in fighting the Global War on Terrorism. These factors also pose challenges for Congress in providing adequate oversight of the progress being made toward modularity goals and of the funds being appropriated for this purpose. In this challenging environment, it is important for the Army to clearly establish and communicate its funding priorities and equipment and personnel requirements and to assess the risks associated with its plans. Moreover, it is important for the Army to establish a comprehensive, long-term approach for its modular restructuring that reports not only a schedule for creating modular units but also measures of its progress toward meeting its goal of creating a more rapidly deployable, joint, expeditionary force. Without such an approach, the Secretary of Defense and Congress will not have the information needed to weigh competing funding priorities and monitor the Army’s progress in its over $52 billion effort to transform its force.

Mr. Chairman and members of the subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you may have.

For future questions about this statement, please contact Janet St. Laurent at (202) 512-4402. Other individuals making key contributions to this statement include Gwendolyn Jaffe, Assistant Director; Margaret Best; Alissa Czyz; Christopher Forys; Kevin Handley; Joah Iannotta; Harry Jobes; David Mayfield; Sharon Pickup; Jason Venner; and J. Andrew Walker.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Army considers its modular force transformation the most extensive restructuring it has undertaken since World War II. Restructuring the Army from a division-based force to a modular brigade-based force will require extensive investments in equipment and retraining of personnel. The foundation of the modular force is the creation of standardized modular combat brigades designed to be stand-alone, self-sufficient units that are more rapidly deployable and better able to conduct joint operations than their larger division-based predecessors.

GAO was asked to testify on the status of the Army's modularity effort. This testimony addresses (1) the Army's cost estimate for restructuring to a modular force, (2) progress and plans for equipping modular brigade combat teams, (3) progress made and challenges to meeting personnel requirements, and (4) the extent to which the Army has developed an approach for assessing modularity results and the need for further adjusting designs or implementation plans. This testimony is based on previous and ongoing GAO work examining Army modularity plans and cost. GAO's work has been primarily focused on the Army's active forces. GAO has suggested that Congress consider requiring the Secretary of Defense to provide a plan for overseeing spending of funds for modularity.

Although the Army is making progress creating modular units, it faces significant challenges in managing costs and meeting equipment and personnel requirements associated with modular restructuring in the active component and National Guard. Moreover, the Army has not provided sufficient information for the Department of Defense and congressional decision makers to assess the capabilities, costs, affordability, and risks of the Army's modular force implementation plans.

The Army's cost estimate for completing modular force restructuring by 2011 has grown from an initial rough order of magnitude of $28 billion in 2004 to $52.5 billion currently. Although the Army's most recent estimate addresses some shortcomings of its earlier estimate, it is not clear to what extent the Army can achieve expected capabilities within its cost estimate and planned time frames for completing unit conversions. Moreover, according to senior Army officials, the Army may request additional funds for modularity beyond 2011.

Although modular conversions are under way, the Army is not meeting its near-term equipping goals for its active modular combat brigades, and units are likely to have shortfalls of some key equipment until at least 2012. The Army plans to mitigate risk in the near term by providing priority for equipping deployed units and maintaining other units at lower readiness levels. However, it has not yet defined specific equipping plans for units in various phases of its force rotation model. As a result, it is unclear what level of equipment units will have and how well units with low priority for equipment will be able to respond to unforeseen crises.

In addition, the Army faces significant challenges in implementing its plan to reduce overall active component end strength from 512,400 to 482,400 soldiers by fiscal year 2011 while increasing the size of its modular combat force from 315,000 to 355,000. This will require the Army to eliminate or realign many positions in its noncombat force.
The Army has made some progress in reducing military personnel in noncombat positions through military-to-civilian conversions and other initiatives, but some of its goals for these initiatives may be difficult to meet and could lead to difficult trade-offs. Already, the Army does not plan to fully fill some key intelligence positions required by its new modular force structure.

Finally, the Army does not have a comprehensive and transparent approach to measure progress against stated modularity objectives and assess the need for further changes to modular designs. The Army has not established outcome-related metrics linked to many of its modularity objectives. Further, although the Army is analyzing lessons learned from Iraq and training events, it does not have a long-term, comprehensive plan for further analysis and testing of the designs and fielded capabilities. Without performance metrics and a comprehensive testing plan, neither the Secretary of Defense nor congressional leaders will have full visibility into the capabilities of the modular force as it is currently organized, staffed, and equipped.
Our past work has found that program performance cannot be accurately assessed without valid baseline requirements established at the program start. Without the development, review, and approval of key acquisition documents, such as the mission need statement, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. We have also identified technologies that DHS has deployed that have not met key performance requirements. For example, in June 2010, we reported that over half of the 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, and establishing acquisition program baselines. We made a number of recommendations to help address these issues, as discussed below. DHS has generally agreed with these recommendations and, to varying degrees, has taken actions to address them.

In addition, our past work has found that DHS faces challenges in identifying and meeting program requirements in a number of its programs. For example:

In July 2011, we reported that TSA revised its explosive detection system (EDS) requirements to better address current threats and plans to implement these requirements in a phased approach. However, we reported that only some of the EDSs in TSA's fleet are configured to detect explosives at the levels established in the 2005 requirements. The remaining EDSs are configured to detect explosives at 1998 levels. When TSA established the 2005 requirements, it did not have a plan with the appropriate time frames needed to deploy EDSs to meet the requirements. To help ensure that EDSs are operating most effectively, we recommended that TSA develop a plan to deploy and operate EDSs to meet the most recent requirements to ensure new and currently deployed EDSs are operated at the levels in established requirements. DHS concurred with our recommendation.

In September 2010, we reported that the Domestic Nuclear Detection Office (DNDO) was simultaneously engaged in the research and development phase while planning for the acquisition phase of its cargo advanced automated radiography system to detect certain nuclear materials in vehicles and containers at ports. DNDO pursued the deployment of the cargo advanced automated radiography system without fully understanding the physical requirements of incorporating the system in existing inspection lanes at ports of entry. We reported that this occurred because, during the first year or more of the program, DNDO and CBP had few discussions about operating requirements for primary inspection lanes at ports of entry. DHS spent $113 million on the program since 2005 and canceled the development phase of the program in 2007.

In May 2010, we reported that not all of the Secure Border Initiative Network (SBInet) operational requirements that pertain to Block 1 were achievable, verifiable, unambiguous, and complete. For example, a November 2007 DHS assessment found problems with 19 operational requirements, which form the basis for the lower-level requirements used to design and build the system. As a result, we recommended that the Block 1 requirements, including key performance parameters, be independently validated as complete, verifiable, and affordable and any limitations found in the requirements be addressed.
DHS agreed with these recommendations, and CBP program officials told us that they recognized the difficulties they experienced with requirements development practices on the SBInet program. In January 2011, the Secretary of Homeland Security announced her decision to end the program as originally conceived because it did not meet cost-effectiveness and viability standards.

In October 2009, we reported that TSA passenger screening checkpoint technologies were delayed because TSA had not consistently communicated clear requirements for testing the technologies. We recommended that TSA evaluate whether current passenger screening procedures should be revised to require the use of appropriate screening procedures until TSA determined that existing emerging technologies meet its functional requirements in an operational environment. TSA agreed with this recommendation and reported taking actions to address it.

Our prior work has also identified that failure to resolve problems discovered during testing can sometimes lead to costly redesign and rework at a later date, and that addressing such problems during the testing and evaluation phase, before moving to the acquisition phase, can help agencies avoid future cost overruns. Specifically:

In March 2011, we reported that the independent testing and evaluation of SBInet's Block 1 capability to determine its operational effectiveness and suitability was not complete at the time DHS reached its decision regarding the future of SBInet or requested fiscal year 2012 funding to deploy the new Alternative (Southwest) Border Technology. We reported that because the Alternative (Southwest) Border Technology incorporates a mix of technology, including an Integrated Fixed Tower surveillance system similar to that currently used in SBInet, the testing and evaluation could have informed DHS's decision about moving forward with the new technology deployment.

In September 2010, we reported that S&T's plans for conducting operational testing of container security technologies did not reflect all of the operational scenarios that CBP was considering for implementation. We reported that until the container security technologies are tested and evaluated consistent with all of the operational scenarios, S&T cannot provide reasonable assurance that the technologies will function as intended. For example, S&T did not include certain scenarios necessary to test how a cargo container would be transported throughout the maritime supply chain. We recommended that DHS test and evaluate the container security technologies consistent with all the operational scenarios DHS identified for potential implementation. DHS concurred with our recommendation.

In October 2009, we reported that TSA deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. TSA also lacked assurance that the portals would meet functional requirements in airports within estimated costs, and the machines were more expensive to install and maintain than expected. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs. We recommended that, to the extent feasible, TSA ensure that tests are completed before deploying checkpoint screening technologies to airports.
DHS concurred with the recommendation and has taken action to address it, such as requiring more-recent technologies to complete both laboratory and operational tests prior to deployment.

Our prior work has shown that cost-benefit analyses help congressional and agency decision makers assess and prioritize resource investments and consider potentially more cost-effective alternatives, and that without this ability, agencies are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. For example, we have reported that DHS has not consistently included these analyses in its acquisition decision making. Specifically:

In March 2011, we reported that the decision by the Secretary of Homeland Security to end the SBInet program was informed by, among other things, an independent analysis of cost-effectiveness. However, it was not clear how DHS used the results to determine the appropriate technology plans and budget decisions, especially since the results of SBInet's operational effectiveness were not complete at the time of the Secretary's decision to end the program. Furthermore, the cost analysis was limited in scope and did not consider all technology solutions because of the need to complete the first phase of the analysis in 6 weeks. It also did not assess the technology approaches based on the incremental effectiveness provided above the baseline technology assets in the geographic areas evaluated. As we reported, for a program of this importance and cost, the process used to assess and select technology needs to be more robust.

In October 2009, we reported that TSA had not yet completed a cost-benefit analysis to prioritize and fund its technology investments for screening passengers at airport checkpoints. One reason that TSA had difficulty developing a cost-benefit analysis was that it had not yet developed life-cycle cost estimates for its various screening technologies. We reported that this information was important because it would help decision makers determine, given the cost of various technologies, which technology provided the greatest mitigation of risk for the resources that were available. We recommended that TSA develop a cost-benefit analysis. TSA agreed with this recommendation and has completed a life-cycle cost estimate and collected information for its checkpoint technologies, but has not yet completed a cost-benefit analysis.

In June 2009, we reported that DHS's cost analysis of the Advanced Spectroscopic Portal (ASP) program did not provide a sound analytical basis for DHS's decision to deploy the portals. We also reported that an updated cost-benefit analysis might show that DNDO's plan to replace existing equipment with advanced spectroscopic portals was not justified, particularly given the marginal improvement in detection of certain nuclear materials required of advanced spectroscopic portals and the potential to improve the current-generation portal monitors' sensitivity to nuclear materials, most likely at a lower cost. At that time, DNDO officials stated that they planned to update the cost-benefit analysis. After spending more than $200 million on the program, in February 2010 DHS announced that it was scaling back its plans for development and use of the portals technology.

Since DHS's inception in 2003, we have designated implementing and transforming DHS as high risk because DHS had to transform 22 agencies—several with major management challenges—into one department.
This high-risk area includes challenges in strengthening DHS’s management functions, including acquisitions; the impact of those challenges on DHS’s mission implementation; and challenges in integrating management functions within and across the department and its components. Failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. In part because of the problems we have highlighted in DHS’s acquisition process, implementing and transforming DHS has remained on our high-risk list.

DHS currently has several plans and efforts underway to address the high-risk designation as well as the more specific challenges related to acquisition and program implementation that we have previously identified. In June 2011, DHS reported to us that it is taking steps to strengthen its investment and acquisition management processes across the department by implementing a decision-making process at critical phases throughout the investment life cycle. For example, DHS reported that it plans to establish a new model for managing departmentwide investments across their life cycles. Under this plan, S&T would be involved in each phase of the investment life cycle and participate in new councils and boards DHS is planning to create to help ensure that test and evaluation methods are appropriately considered as part of DHS’s overall research and development investment strategies. In addition, DHS reported that the new councils and boards it is planning to establish to strengthen management of the department’s acquisition and investment review process would be responsible for, among other things, making decisions on research and development initiatives based on factors such as viability and affordability and overseeing key acquisition decisions for major programs using baseline and actual data. According to DHS, S&T will help ensure that new technologies are properly scoped, developed, and tested before being implemented. DHS also reports that it is working with components to improve the quality and accuracy of cost estimates and has increased its staff during fiscal year 2011 to develop independent cost estimates, a GAO best practice, to ensure the accuracy and credibility of program costs. DHS reports that four cost estimates for level 1 programs have been validated to date.

The actions DHS reports it has taken or has under way to address the management of its acquisitions and the development of new technologies are positive steps and, if implemented effectively, could help the department address many of these challenges. However, showing demonstrable progress in implementing these plans is key. In the past, DHS has not effectively implemented its acquisition policies, in part because it lacked the oversight capacity necessary to manage its growing portfolio of major acquisition programs. Since DHS has only recently initiated these actions, it is too early to fully assess their impact on the challenges that we have identified in our past work. Going forward, we believe DHS will need to demonstrate measurable, sustainable progress in effectively implementing these actions.

Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Currie, Assistant Director; Bintou Njie; and Michael Kniss. John Hutton, Katherine Trimble, Nate Tranquilli, and Richard Hung also made contributions to this statement. Key contributors for the previous work that this testimony is based on are listed within each individual product.

Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. (Washington, D.C.: July 11, 2011).

Homeland Security: Improvements in Managing Research and Development Could Help Reduce Inefficiencies and Costs. GAO-11-464T. (Washington, D.C.: March 15, 2011).

Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs. GAO-11-448T. (Washington, D.C.: March 15, 2011).

High-Risk Series: An Update. GAO-11-278. (Washington, D.C.: February 16, 2011).

Supply Chain Security: DHS Should Test and Evaluate Container Security Technologies Consistent with All Identified Operational Scenarios to Ensure the Technologies Will Function as Intended. GAO-10-887. (Washington, D.C.: September 29, 2010).

Combating Nuclear Smuggling: Inadequate Communication and Oversight Hampered DHS Efforts to Develop an Advanced Radiography System to Detect Nuclear Materials. GAO-10-1041T. (Washington, D.C.: September 15, 2010).

Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. (Washington, D.C.: June 30, 2010).

Secure Border Initiative: DHS Needs to Reconsider Its Proposed Investment in Key Technology Program. GAO-10-340. (Washington, D.C.: May 5, 2010).

Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations That Place Key Technology Program at Risk. GAO-10-158. (Washington, D.C.: January 29, 2010).

Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges. GAO-10-128. (Washington, D.C.: October 7, 2009).

Combating Nuclear Smuggling: Lessons Learned from DHS Testing of Advanced Radiation Detection Portal Monitors. GAO-09-804T. (Washington, D.C.: June 25, 2009).

Combating Nuclear Smuggling: DHS Improved Testing of Advanced Radiation Detection Portal Monitors, but Preliminary Results Show Limits of the New Technology. GAO-09-655. (Washington, D.C.: May 21, 2009).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our past work examining the Department of Homeland Security's (DHS) progress and challenges in developing and acquiring new technologies to address homeland security needs. DHS acquisition programs represent hundreds of billions of dollars in life-cycle costs and support a wide range of missions and investments, including border surveillance and screening equipment, nuclear detection equipment, and technologies used to screen airline passengers and baggage for explosives, among others. Since its creation in 2003, DHS has spent billions of dollars developing and procuring technologies and other countermeasures to address various threats and to conduct its missions. Within DHS, the Science and Technology Directorate (S&T) conducts general research and development and oversees the testing and evaluation efforts of DHS components, which are responsible for developing, testing, and acquiring their own technologies. This testimony focuses on the findings of our prior work related to DHS's efforts to acquire and deploy new technologies to address homeland security needs. Our past work has identified three key challenges: (1) developing technology program requirements, (2) conducting and completing testing and evaluation of technologies, and (3) incorporating information on costs and benefits in making technology acquisition decisions. This statement will also discuss recent DHS efforts to strengthen its investment and acquisition processes.

We have identified technologies that DHS has deployed that have not met key performance requirements. For example, in June 2010, we reported that over half of the 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, and establishing acquisition program baselines. Our prior work has also identified that failure to resolve problems discovered during testing can sometimes lead to costly redesign and rework at a later date and that addressing such problems during the testing and evaluation phase before moving to the acquisition phase can help agencies avoid future cost overruns. Specifically: (1) In March 2011, we reported that the independent testing and evaluation of SBInet's Block 1 capability to determine its operational effectiveness and suitability was not complete at the time DHS reached its decision regarding the future of SBInet or requested fiscal year 2012 funding to deploy the new Alternative (Southwest) Border Technology. (2) In September 2010, we reported that S&T's plans for conducting operational testing of container security technologies did not reflect all of the operational scenarios that CBP was considering for implementation. (3) In October 2009, we reported that TSA deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. TSA also lacked assurance that the portals would meet functional requirements in airports within estimated costs, and the machines were more expensive to install and maintain than expected. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs.
Our prior work has shown that cost-benefit analyses help congressional and agency decision makers assess and prioritize resource investments and consider potentially more cost-effective alternatives, and that without such analyses, agencies are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. For example, we have reported that DHS has not consistently included these analyses in its acquisition decision making. Specifically: (1) In March 2011, we reported that the decision by the Secretary of Homeland Security to end the SBInet program was informed by, among other things, an independent analysis of cost-effectiveness. However, it was not clear how DHS used the results to determine the appropriate technology plans and budget decisions, especially since the results of SBInet's operational effectiveness testing were not complete at the time of the Secretary's decision to end the program. Furthermore, the cost analysis was limited in scope and did not consider all technology solutions because of the need to complete the first phase of the analysis in 6 weeks. (2) In October 2009, we reported that TSA had not yet completed a cost-benefit analysis to prioritize and fund its technology investments for screening passengers at airport checkpoints. One reason that TSA had difficulty developing a cost-benefit analysis was that it had not yet developed life-cycle cost estimates for its various screening technologies. (3) In June 2009, we reported that DHS's cost analysis of the Advanced Spectroscopic Portal (ASP) program did not provide a sound analytical basis for DHS's decision to deploy the portals.
NPS manages the National Park System—over 80 million acres of land in the nation—comprising a network of natural, historic, and cultural treasures. NPS’ lands are contained in 388 units that include a diverse array of national parks, military parks, national monuments, national historic sites, recreation areas, and other designations. Of the 388 units in the National Park System, 19 are located in the District. As the system’s federal manager, NPS is charged with preserving and protecting these public lands for future generations. NPS’ National Capital Regional Office manages the 19 units in the District as well as other units in surrounding areas in Maryland and Virginia. The National Capital Region has six management units, referred to by NPS as superintendencies, that manage properties in the District. These management units are (1) National Capital Parks-Central (Central), (2) National Capital Parks-East (East), (3) Rock Creek Park (Rock Creek), (4) White House-President’s Park (White House), (5) George Washington Memorial Parkway (GW Parkway), and (6) Chesapeake & Ohio Canal National Historical Park (C&O Canal)—each with a superintendent/director who is responsible for managing properties within his/her jurisdiction. Figure 1 depicts examples of NPS-managed properties in the District, and figure 2 depicts the geographic areas covered by the National Capital Region management units.

For the last several decades, GAO, the Department of the Interior’s Inspector General, and Interior itself have reported that NPS did not have an accurate inventory of existing assets or a reliable estimate of deferred maintenance costs for these assets. In an effort to more effectively manage its assets, NPS has developed a new asset management process to enable the agency to have a reliable inventory of its assets and a process for reporting on the condition of those assets. The cornerstone of the new asset management process is the Facility Management Software System (FMSS), which allows park, regional office, or NPS headquarters managers to track what maintenance has been directed at each specific asset, when it occurred, and how much it and related work cost. When fully developed and implemented, the new system will, for the first time, enable the agency to have a (1) reliable inventory of its assets, (2) process for reporting on the condition of the assets in its inventory, and (3) consistent, system-wide methodology for estimating the deferred maintenance costs for its assets. Depending on the extent of deferred maintenance required, NPS ranks the assets in FMSS as being in good, fair, poor, or serious condition. Essentially, the lower the estimated deferred maintenance costs, the better the condition of the assets, including properties. (A simplified illustration of this ranking logic follows this background discussion.)

In addition, NPS develops general management plans for its units for the purpose of making proactive decisions that address future opportunities at NPS properties, including planning for visitor use, managing the natural and cultural resources on properties, and developing properties. Accordingly, these plans may identify new or expanded recreational opportunities to enhance visitors’ experience. NPS has developed four general management plans—two final and two draft—that encompass some of the properties it manages in the District.
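NPS determines these ratings by comparing an asset’s estimated deferred maintenance costs with its replacement value, but this report does not specify the numeric thresholds NPS applies. The short Python sketch below is therefore purely illustrative: it shows the general shape of such a ratio-based rating, with hypothetical cutoff values standing in for NPS’ actual criteria, and the condition_rating function name is ours, not NPS’.

# Illustrative sketch only: NPS ranks assets in FMSS as good, fair, poor,
# or serious based on deferred maintenance costs relative to replacement
# value (the lower the ratio, the better the condition). The cutoff
# values below are hypothetical; the report does not state NPS's actual
# criteria.

def condition_rating(deferred_maintenance: float, replacement_value: float) -> str:
    """Classify an asset's condition from its deferred-maintenance ratio."""
    if replacement_value <= 0:
        raise ValueError("replacement value must be positive")
    ratio = deferred_maintenance / replacement_value
    if ratio < 0.05:    # hypothetical cutoff
        return "good"
    if ratio < 0.15:    # hypothetical cutoff
        return "fair"
    if ratio < 0.30:    # hypothetical cutoff
        return "poor"
    return "serious"

# Example: $2 million of deferred maintenance against a $4 million
# replacement value yields a ratio of 0.5, rated "serious" here.
print(condition_rating(2_000_000, 4_000_000))

The point of using a ratio is that a deferred maintenance estimate signals worse condition only in proportion to what the asset would cost to replace, which is why a small property with modest repair needs can still rate worse than a large one carrying a bigger backlog.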
However, if it is determined that federal ownership or management of the assets would be better served elsewhere, either the Congress or NPS can take the necessary action, for example, by transferring title or jurisdiction, or through memoranda of agreement, cooperative agreements, or leases.

NPS manages 356 federally owned properties in the District covering about 6,735 acres. These properties vary by type and size. Properties in the circles, squares, and triangles category represent the greatest number of NPS-managed properties. Properties in the parks and parkways category account for the greatest share of the total acreage managed by NPS, about 93 percent. The majority of all NPS-managed properties in the District, however, are less than 1 acre. Appendix II provides additional descriptive information on the 356 NPS-managed properties in the District.

NPS manages a diverse array of properties in the District that vary by type, including parks, parkways, and triangles. The properties also vary in size, ranging from less than 1 acre to about 1,750 acres. NPS does not classify the properties it manages by categories, nor does it use a standard definition for the types of properties it manages. Nevertheless, NPS typically uses terms that are descriptive of the properties. For example, NPS refers to a triangular-shaped property as a “triangle” and a circular-shaped property as a “circle.” For ease in reporting, we grouped NPS-managed properties into the following four categories: (1) park and parkways; (2) circles, squares, and triangles; (3) Mall/Washington Monument and grounds; and (4) other. Table 1 shows the number of properties in the four categories.

The majority of the 356 NPS-managed properties in the District are in the circles, squares, and triangles category, representing about 60 percent of the total properties that NPS manages in the District. Circles, squares, and triangles range in size from 0.01 acres to about 7 acres, with an average size of 0.3 acres. Triangles represent the largest number of properties (178 of the 200 properties) in this category and are managed by the Central, East, and Rock Creek management units. The remaining properties consist of 15 circles and 7 squares. A recent National Capital Planning Commission report refers to the circles, squares, and triangles as designed landscape parks that include fountains, monuments, memorials, and other civic art features. Examples include Dupont Circle, Thomas Circle, Logan Circle, Farragut Square, McPherson Square, and Bolivar Triangle. Figure 3 shows the percentage of NPS-managed properties by category. Figure 4 shows the number of properties by component of the circles, squares, and triangles category. Appendix III provides additional information on properties in the circles, squares, and triangles category.

There are 95 properties in the parks and parkways category, which contain about 6,246 acres, or 93 percent, of the total acreage of NPS-managed property in the District. Figure 5 shows the percentage of the NPS-managed property acres, by category. Three properties—Rock Creek Park, Piney Branch Parkway, and Anacostia Park—contain about 50 percent of the acres associated with properties in this category. The Central, East, and Rock Creek management units manage about 99 percent of the properties and 93 percent of the total acreage in the parks and parkways category. Figure 6 shows the acreage of properties in the parks and parkways category managed by the National Capital Region management units.
The majority of the NPS-managed properties are less than 1 acre in size. Of the 356 properties in the District, 242, or about 68 percent, are less than 1 acre in size. Of these, 108 properties are less than one-tenth of an acre. Most of the properties less than 1 acre are managed by the Central and East management units. The most common types of properties less than 1 acre are triangles, center parking, and curb parking. Figure 7 shows the percentage and number of NPS-managed properties by size.

The majority of NPS-managed properties in the District have some type of recreational facilities, including sports facilities. To identify what recreational facilities are available on the 356 NPS-managed properties in the District, we asked NPS officials to identify the (1) types of recreational facilities on the properties, such as park benches, outdoor grills, playgrounds, and picnic tables or shelters, and (2) sports-related facilities, such as basketball and tennis courts and baseball and softball fields. NPS officials reported that 202 properties had some type of recreational facility, including 25 properties that had 205 sports facilities. The remaining 154 (of the 356) properties had neither type of facility. Regardless of whether they have facilities, NPS officials believe that the properties they manage provide some form of recreation. Appendix IV provides information on the properties without either type of recreational facility.

About 60 percent of the NPS-managed properties in the District have some type of recreational facility, including sports facilities. Of the 356 properties, 22 had both sports facilities and other types of recreational facilities, 177 had recreational facilities exclusive of sports facilities, and 3 had sports facilities only. Table 2 shows the number of properties with the specific types of recreational facilities, exclusive of sports facilities.

Twenty-five of the 356 properties have a total of 205 sports facilities, such as tennis courts or softball and baseball fields. About 92 percent of the sports facilities are located on properties in the parks and parkways category, about 6 percent in the Mall/Washington Monument and grounds category, and about 1 percent each in the circles, squares, and triangles and other categories. Figure 8 shows, by category, the number of properties containing sports facilities and the total number of sports facilities on those properties.

As table 3 shows, 109, or 53 percent, of the 205 sports facilities are tennis and racquetball courts and multiple-use fields. Multiple-use fields are generally areas that can be used to play various sports, such as soccer, football, softball, and baseball. Tennis and racquetball courts total 73, or 36 percent, of the sports facilities, whereas multiple-use fields total 36, or 18 percent. The tennis and racquetball courts are located at 7 different NPS-managed properties, while the multiple-use fields are found at 14 different properties. Most of the sports facilities are managed by NPS’ Central, East, and Rock Creek management units. Together, these management units manage approximately 98 percent of the total sports facilities. Appendix V provides additional information on properties with sports facilities.

According to NPS assessments and information reported in its FMSS database, most NPS-managed properties with sports facilities are in good or fair condition.
Information on the condition of each sports facility, however, is generally not identifiable in FMSS, and NPS does not have clear criteria to systematically assess sports facility condition. Moreover, although most properties appeared to be well maintained during our visual inspections, we identified some sports facilities with deficiencies that pose a potential safety hazard.

We asked NPS to provide us with condition assessments recorded in FMSS for each of the 25 properties with sports facilities. FMSS did not provide an assessment of one property’s condition due to security issues. Furthermore, some properties were grouped together into one assessment and other properties received more than one assessment. As a result, NPS provided 22 assessments covering the remaining 24 properties. Figure 9 shows the number of assessments and condition for properties with sports facilities. Appendix VI provides additional information on the condition assessments of properties with sports facilities.

We reviewed the documentation supporting the deferred maintenance associated with each of the 24 properties for which we were provided assessments to ascertain if it provided data that could be used to identify the condition of each sports facility. However, this information was of limited use in identifying sports facilities’ conditions. For example, the information, for the most part, discussed such deferred maintenance as repairing or refurbishing landscapes and structures; replacing pavement, concrete curbing, picnic tables, and seawalls; and repainting fences, posts, and benches. This information was not descriptive enough to determine whether the maintenance pertained to the sports facilities or other parts of the property.

However, NPS collectively assessed the athletic fields at West Potomac Park, which comprise 8 multiple-use fields, 11 volleyball courts, and 13 baseball fields, to be in poor condition. According to NPS officials, the poor condition assigned to the athletic fields was due primarily to the deferred maintenance needed for the park’s volleyball courts and multiple-use fields, and not the baseball fields, which are in good condition. NPS identified about $1 million in deferred maintenance costs for refurbishing the volleyball courts, reseeding and sodding the fields, pruning trees, painting fencing, repairing backstops, replacing sidewalk pavement, and upgrading the sewage system. Even in this case, the condition of each sports facility could not be determined, since the assessment was done for the collective group of athletic fields.

In reviewing each property’s deferred maintenance documentation, we also determined that the condition of the property or section of property may not be indicative of the condition of any sports facility on the property. For example, while NPS assessed the Fort Reno Park property as being in good condition, 84 percent of the total deferred maintenance needs for this property were directly related to refurbishing the multiple-use athletic fields thereon. Because such a large percentage of the deferred maintenance costs involve refurbishing the athletic fields, it is likely that the condition of the fields is worse than what the assessment showed for the property overall. It should also be noted that NPS assessments do not include determining the condition of structures, such as boat ramps, horse stables, ice and roller skating rinks, and roadways, because criteria to assess these structures have not been finalized for use by the park units.
As previously identified in table 3, several of the sports facilities on the NPS-managed properties include these structures, again underscoring that limited information is available on the condition of sports facilities.

We did not have criteria to judge the condition of the sports facilities. However, since NPS’ information was of limited value in determining their condition, we visually inspected each of the 205 sports facilities to look for obvious deficiencies, such as cracks in the surface of tennis and basketball courts, fallen or missing basketball posts and baskets, and broken backstops around baseball/softball fields. We did note some of these deficiencies, but for the most part, the facilities did not have these conditions and appeared to be well maintained. Figure 10 shows an example of properties that appear to be well maintained. In contrast, figure 11 shows an example of a facility with a cracked surface area. In addition, we identified some conditions at some sports facilities, primarily in Anacostia Park, that posed potential safety hazards to users. See figures 12 through 17 for some examples of these safety hazards.

Current NPS general management plans identify opportunities for some expanded recreational uses, such as improving visitor experience by increasing the number and scope of exhibits; creating a regional sports complex; rehabilitating selected baseball/softball fields and basketball and tennis courts; and creating new hiking trails. NPS has developed four general management plans—two final and two draft—that encompass 37 of the 356 properties and 4,196 of the 6,735 acres it manages in the District. Appendix VII shows additional information on the properties covered by the four general management plans. The two finalized general management plans are for properties that encompass the Fort Circle Parks and the Mary McLeod Bethune Council House National Historic Site. The following are the new or expanded recreational opportunities identified in the final plans:

Fort Circle Parks: The primary new recreational opportunity is the creation of a new walking trail that connects most of the forts encompassing the Fort Circle Parks system. These include Forts Dupont, Totten, Stevens, Stanton, Davis, and Reno. The plan also calls for developing a small year-round visitor contact facility near Fort Stevens—the site of the only Civil War battle fought in the District—to provide a focal point for the Fort Circle Parks system. The facility will offer orientation and interpretation and serve as the start of a driving tour of the forts. In terms of improving existing facilities, the plan calls for rehabilitating selected baseball/softball fields, basketball and tennis courts, picnic areas, and other facilities. In addition, the activity center at Fort Dupont would be developed into an education center to focus on school and community groups, offering cultural, historical, natural, and environmental programming.

Mary McLeod Bethune Council House National Historic Site: The Mary McLeod Bethune Council House National Historic Site’s plan focuses solely on developing interpretive exhibits on the house and the life of Mary McLeod Bethune, who founded the National Council of Negro Women.

The two draft plans are for properties in (1) Rock Creek Park and the Rock Creek and Potomac Parkway and (2) Anacostia Park.
The following are the new or expanded recreational opportunities identified in the draft plans and, where one was designated, the plan’s preferred alternative:

Rock Creek Park and the Rock Creek and Potomac Parkway: The 2003 draft plan for Rock Creek Park and the Rock Creek and Potomac Parkway considers closing off the upper part of Beach Drive—a major thoroughfare through the park—during nonrush hours to make it available for nonmotorized recreational uses such as biking, rollerblading, jogging, and walking. The plan also proposes upgrades of existing foot and bridle trails to enhance the visitor’s experience while biking, jogging, walking, or horseback riding through designated areas of the park. NPS estimates that this plan will be finalized in the spring of 2005.

Anacostia Park: The Anacostia Park plan contains two alternatives, neither of which was identified as the preferred alternative. The first alternative focuses on enhancement and expansion of current sports facilities, with an emphasis on neighborhood recreation. This alternative proposes that a small area at Poplar Point be designated as a grand waterfront park area. The second alternative proposes enhancements to current sports facilities and natural resources and emphasizes large-scale sports facilities to support regional sporting events. Both alternatives propose access to and additional boating facilities on the Anacostia waterfront. NPS hopes to release this plan to the public for comment in the summer of 2005.

There are a number of ways for NPS property to be made available to the District for recreational purposes, including through a transfer of title, transfer of jurisdiction, memoranda of agreement or cooperative agreements, partnerships of public and private entities, and leases. As discussed below, some of these options may be implemented by NPS and the District under existing legislation; others would require enactment of new legislation.

In general, title to property allows the owner to possess, control, and assert all rights over that property. Legislation could be enacted transferring title to NPS property, or requiring such a transfer, to the District. For example, at least two federal laws specifically direct the Department of the Interior to transfer title to land under NPS jurisdiction from the federal government to the District. Public Law No. 99-581, enacted in 1986, directed the Secretary of the Interior to convey title to the Robert F. Kennedy Memorial Stadium building to the District. Also, in accordance with the law, the underlying land and associated parking facilities were leased to the District government. The National Children’s Island Act of 1995 directed the Secretary of the Interior to transfer title to portions of two islands in Anacostia Park (Heritage Island and the southern portion of Kingman Island) to the District. The purpose was to facilitate the construction, development, and operation of National Children’s Island, envisioned as a cultural, educational, and family-oriented park.

Under a transfer of jurisdiction between the federal government and the District, the transferor retains ownership of the property while the transferee may be given authority to administer and maintain (manage) the property. The federal government has general statutory authority to transfer jurisdiction over park properties that it owns in the District to the District government.
Prior to enactment of this authority in 1932, congressional action was required for each transfer between the federal government and the District. According to a report of the House Committee on Public Buildings and Grounds, the purpose of the law was to obviate the need for the Congress to approve each exchange between the federal government and the District, thereby lessening the work of the Congress and resulting in savings of public funds.

Before NPS transfers jurisdiction over a District property, the transfer must be recommended by the National Capital Planning Commission (Commission)—the central planning agency for the federal government in the National Capital. The Commission may examine a number of factors in determining whether to recommend a transfer—including whether the proposal is generally reasonable and justified, consistent with the Commission’s Comprehensive Plan, and/or raises any environmental, historic preservation, or other impact concerns. The Commission may also examine public comments prior to its final decision, which may result in approval, approval with stipulations, or disapproval. After a transfer is approved and completed, District authorities are required to report the transfer and associated agreements to the Congress. Official city maps may be changed and records updated after an official exchange of letters of transfer and acceptance between the parties. Examples of jurisdictional transfers from the federal government to the District government include properties used or to be used for correctional facilities and the performing arts.

Memoranda of agreement or cooperative agreements are legal instruments that establish a relationship between a federal agency and a state or local government, or other recipient. They are used to transfer a thing of value in carrying out a public purpose of support or stimulation, and substantial involvement is expected between the parties. A 1949 agreement between NPS and the District of Columbia Recreation Board implements authority provided in Public Law No. 77-534 (1942) for making federal lands and recreational facilities located in the District, including those of NPS, available to the District of Columbia Recreation Board to conduct its recreation programs. The agreement lists a number of parks or park areas that may be used by the District of Columbia Recreation Board to carry out various recreation activities, through concession contracts or otherwise, including golf courses in Anacostia Park and Rock Creek Park, tennis courts in East Potomac Park and Rock Creek Park, and boating at West Potomac Park, East Potomac Park, and Columbia Island.

NPS is authorized by several statutes to enter into cooperative agreements with the District for management of federal parks. For example, where a federal park is located adjacent to or near a state or local park, and cooperative management would be more efficient, NPS may enter into a cooperative agreement with the state or local government agency in order to cooperatively manage the parks and provide or acquire goods and services. NPS would continue to retain administrative responsibilities over the park, as the law prohibits the transfer of these responsibilities. Another provision of law authorizes NPS to enter into cooperative agreements with and transfer appropriated funds to state and local governments, among other entities, for the public purpose of carrying out NPS programs.
A lease is a written contract through which use and possession of property are granted to a person for a specified period of time. NPS has authority to lease certain federally owned or administered property located within the boundaries of park areas. For example, NPS issued a lease to the Friends of Fort Dupont to manage an ice rink at Fort Dupont Park, a federally owned park located in the District. Regulations implementing NPS leasing authority are set out at 36 C.F.R. part 18. The regulations describe, among other things, the determinations NPS must make before leasing property, procedures for awarding leases, and required lease terms and conditions. Unless otherwise authorized by law, a lease may not authorize the lessee to engage in activities that must be authorized through a concession contract, commercial use authorization, or similar instrument. Proposed lease activities are subject to authorization under a concession contract or commercial use authorization if the director of NPS determines, in accordance with regulations and guidance, that the proposed activities meet applicable requirements for issuance of such instruments.

A partnership is usually defined as a type of cooperative business relationship; however, cooperative efforts in a public project may also be referred to as a partnership and can be established through legislation. For example, in an effort to coordinate government and private sector activities in the development and implementation of an integrated resource management plan for certain Boston Harbor islands, Congress established the Boston Harbor Islands National Recreation Area, administered by a partnership including representatives from NPS, the U.S. Coast Guard, state and local government offices, and various private sector organizations. The law established the framework for administration of the area, parameters for operation of and membership in the partnership, minimum requirements for the management plan to be submitted to Interior, and establishment of an advisory council to represent interested groups and make recommendations to the partnership. Similarly, NPS properties in the District may be identified with recreational potential that could be maximized through such partnerships.

We provided the Department of the Interior with a draft of this report for review and comment. The department provided written comments that are included in appendix VIII. The department also provided technical clarifications, which we have incorporated as appropriate. In addressing the condition of properties, the department stated that NPS is diligently implementing an asset management plan that addresses deficiencies and that needed repair and rehabilitation of properties are evaluated and prioritized against the needs of the entire national park system. Projects of greatest need are undertaken in priority order, focusing on critical health and safety and resource protection issues. In addition, the department said that NPS does not have the authority to enter into a lease that allows erection of a structure on such property. While we agree that the agency’s authority is limited, NPS regulations authorize a lease to include a provision allowing for the construction of minor additions, buildings, and other structures as long as the construction is (1) necessary for the support of authorized lease activities, (2) otherwise consistent with the protection and purposes of the park area, and (3) approved by the director.
We will send copies of this report to interested congressional committees, the Secretary of the Interior, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me or Roy Judy at (202) 512-3841. Key contributors to this report are listed in appendix IX.

Our study included all federally owned properties that were managed by the National Park Service (NPS) in Washington, D.C. (the District). These properties are under the jurisdiction of NPS’ National Capital Region. To determine the universe of properties managed by NPS in the District, we obtained an electronic file from the National Capital Region that contained all of the properties (reservations) that it manages. Working with NPS officials, we excluded properties outside of the District. This resulted in a universe of 356 properties.

To facilitate our analysis and reporting, we grouped the NPS-managed properties into four categories, primarily based on their description as provided on NPS’ Reservation List. (NPS does not have a list of the types of properties under its jurisdiction, nor does it have standard definitions for the properties it manages.) The four categories we used were park and parkways; circles, squares, and triangles; Mall/Washington Monument and grounds; and other. The park and parkways category consists of properties described on NPS’ Reservation List as a park, parkway, or combination thereof. Also in this category are several forts and batteries, which are components of Fort Circle Parks. Similarly, the circles, squares, and triangles category includes properties referred to on the Reservation List as physical structures, i.e., circles, squares, or triangles. The Mall/Washington Monument and grounds category contains properties identified on the Reservation List as being connected with the Mall, the Washington Monument, or its grounds. Finally, the “other” category consists of various properties described on the Reservation List as curb/center parking, cemeteries, islands, hills, historical sites, plazas, memorials, annexes, and fountains. We also included in this category Ford’s Theatre and the Mary McLeod Bethune Council House. Appendix II provides a complete listing of the NPS-managed properties in the District and their respective categories.

To identify the recreational and sports-related facilities on NPS-managed properties, we developed a data collection instrument (DCI) that we distributed to NPS National Capital Region officials. The DCI contained questions about the existence of recreation facilities, including the number of sports-related facilities; plans for future facilities; and other information about each of the NPS-managed properties. To ensure that data gathered from the DCI were valid and reliable, we performed a number of steps. First, we conducted pretests with National Capital Region officials in two of its management units and revised the DCI in accordance with these pretest results. We next asked the region’s six management units, each headed by a superintendent or director with management responsibility for its respective properties, to complete one DCI for each of the 356 properties under its jurisdiction. We received completed DCIs for 100 percent of the properties.
We then reviewed the completed DCIs for obvious errors, such as missing responses to questions and inconsistencies in the answers provided. We also compared the responses with the knowledge we gained from visiting or researching the properties. In addition, we conducted follow-up interviews with region officials to clarify or correct any errors. The data from the hard copy DCIs were key-punched into an electronic data file and checked for accuracy. We used this data file for our analysis. We performed all analysis presented in this report through programming in statistical software, and both the programming and the results were checked for accuracy.

We identified various types of recreational facilities, including sports facilities, on the properties. To verify the accuracy of the NPS-reported data, we visited each of the properties that NPS identified as having sports facilities or a combination of sports facilities and other facilities. During these visits, we determined if the property contained the facilities reported to us in the DCI. For the most part, we found that the reported number of facilities on these 25 properties was consistent with what we observed. However, we found four tennis courts that had not been reported to us—one on a property in East Potomac Park and three in Rock Creek Park. NPS officials acknowledged that they had made a mistake in not reporting these facilities on the DCI. Accordingly, we revised our database to include these four facilities.

To determine the condition of the properties with sports facilities as well as the condition of the sports facilities themselves, we relied on information in NPS’ Facility Management Software System (FMSS) and interviews with NPS officials. FMSS is a system that identifies the condition of assets, including properties, based on the asset’s deferred maintenance costs relative to its replacement value. Thus, the higher the deferred maintenance costs, the worse the condition of the asset. Based on NPS criteria, the condition of properties is assessed as good, fair, poor, or serious. We asked NPS to provide FMSS data for each of the 25 properties with sports facilities. NPS provided assessments on the condition of 24 of the 25 properties. It did not provide information on one property due to security issues. We then collected the records in support of the estimated deferred maintenance costs for each of the 24 properties to ascertain if the records contained specific information relative to the condition of the sports facilities on the properties. On the basis of our review, we determined that the maintenance records did not provide sufficient detail to determine the condition of the sports facilities. Since FMSS did not provide sufficient information to determine the condition of the sports facilities, we inspected these facilities to identify any deficiencies, such as cracks in the surface of tennis and basketball courts. For the most part, these facilities appeared to be well maintained. However, we did find some facilities that required maintenance to address potential safety hazards.

To determine any new or expanded recreational use proposed for NPS-managed properties in general management plans, we asked NPS to identify the plans it had developed for properties that it manages in the District. We were provided final general management plans for the Mary McLeod Bethune Council House National Historic Site and the Fort Circle Parks, and draft management plans for (1) Rock Creek Park and the Rock Creek and Potomac Parkway and (2) Anacostia Park.
We reviewed each of these plans to ascertain the specifics regarding new or expanded recreational uses for the properties. We also spoke with NPS officials to identify the properties covered under each plan and to obtain information regarding any updates to these plans and their respective completion dates. We were also provided a general plan for the Chesapeake and Ohio Canal National Historical Park. However, this plan was developed in 1976, prior to the issuance of the guidance on what is to be included in a general management plan. Further, it was outdated under NPS’ guidance, which provides that general management plans should not be more than 15 years old. Thus, we did not include this plan in our analysis. Finally, to identify methods for conveying management responsibility of NPS-managed properties to the District, we reviewed legislation, regulations, legislative histories, and other documents in addition to interviewing officials in the Department of the Interior’s Solicitor’s Office and NPS’ National Capital Region.

This appendix provides information on the 356 NPS-managed properties in the District. Table 4 shows the properties by National Capital Region management unit, location description, and property size in acres.

This appendix provides information on the 200 NPS-managed properties in the District that are in the circles, squares, and triangles category. Table 5 shows, by National Capital Region management unit, the properties by property number, location description, category type, and size (in acres) associated with each property.

This appendix provides information on the 154 NPS-managed properties in the District that do not contain any recreational facilities, including sports facilities. In summary, 116 (75 percent) of these properties were in the circles, squares, and triangles category, but accounted for only about 16 acres (4 percent) of the 450 total acres. Sixteen of the 154 properties were in the park and parkways category and accounted for 398 acres (88 percent) of the total. The remaining 22 properties were in the “other” category and accounted for 36 of the total acres. Table 6 shows, by National Capital Region management unit, property number, location description, type of property, and size (in acres) associated with each property.

This appendix provides information on the 25 NPS-managed properties in the District that contain the sports facilities identified in table 3 of this report. Table 7 shows the properties with sports facilities by National Capital Region management unit, location description, property size (in acres), and the number and types of sports facilities on each property.

This appendix provides information on the condition of properties with sports facilities. Each year NPS assesses the condition of the grounds, landscapes, and associated features of federal properties it manages. NPS conducts these condition assessments on a location basis, which may encompass several properties. NPS provided assessments covering 24 of the 25 properties with sports facilities. An assessment was not provided for one property due to security reasons. Table 8 provides information on the results of these assessments.

This appendix provides information on the properties covered in general management plans. Table 9 shows the property number, size (in acres), and location associated with properties covered in the Anacostia, Fort Circle Parks, Mary McLeod Bethune, and Rock Creek Park and the Rock Creek and Potomac Parkway general management plans.
The following are GAO’s comments on the Department of the Interior’s letter dated April 13, 2005.

1. Fort Circle Parks, Rock Creek Park, Piney Branch Parkway, and Anacostia Park are already included in the “park and parkways” category. A more detailed description concerning the composition of the park and parkways category can be found in appendix I.

2. When formatted, the captions track with their respective photos.

3. We removed the reference to boat rentals and included information that identifies the specific property so that it could be more readily identified.

4. We clarified the caption at the bottom of the photos to explain that the recreational facilities have not been in use since the property was closed to the public.

In addition to those named above, Diana Cheng, John Delicath, Doreen Feldman, John Johnson, Julian Klazkin, Roselyn McCarthy, and Peter Oswald made key contributions to this report. Mark Braza, Kim Raheb, and Jena Sinkfield made important methodological and graphic contributions to the report.
In recent years, several challenges have emerged concerning future recreational opportunities in the nation's capital. These challenges include ensuring that an adequate supply of parkland and open space is available to meet the needs of an increasing resident population and the estimated 20 million annual visitors to the District of Columbia's cultural institutions, historic sites, parks, and open spaces. GAO identified (1) the universe of federal property in the District of Columbia (the District) managed by the National Park Service (NPS); (2) what recreational facilities, including those that are sports related, exist on these properties; (3) the condition of the properties with sports facilities and the sports facilities thereon; (4) new or expanded recreational uses discussed in NPS general management plans; and (5) the methods that could be used to convey management responsibility for NPS-managed properties to the District government. Commenting on the draft report, Interior stated that NPS is addressing properties in the greatest need of repair or rehabilitation in priority order. It also said that it did not have authority to enter into a lease that allows the erection of a structure on its property. However, GAO believes that existing authority allows the NPS director to approve such leases under certain circumstances. NPS manages 356 federal properties in the District, covering about 6,735 acres of land. Most of the properties are what NPS refers to as circles, squares, and triangles, and are less than 1 acre in size. Parks and parkways make up the second-largest number of properties but account for about 93 percent of the total acreage of the 356 properties. NPS officials reported to GAO that 202 properties it manages in the District had various recreational facilities such as park benches, outdoor grills, and picnic tables or shelters. Of the 202 properties, 25 had 205 sports facilities, such as basketball and tennis courts and baseball and softball fields. Most of the properties with sports facilities were in good or fair condition, according to NPS deferred maintenance records, but information on the condition of individual sports facilities is limited. While GAO did not have criteria to determine the condition of sports facilities, it inspected each of the 205 sports facilities to identify obvious deficiencies, such as cracks in the surface area of tennis and basketball courts. Based on these observations, most of the facilities appeared to be well maintained, but some sports facilities had conditions that posed a potential safety risk. NPS has developed four general management plans--two finalized and two in draft. These plans identify some opportunities for new or expanded recreation, such as rehabilitating selected baseball and softball fields and basketball and tennis courts; creating a regional sports complex; and developing new hiking trails. For example, one of the plans calls for the creation of a new trail that connects Forts Dupont, Totten, Stevens, Reno, and others as part of the Fort Circle Parks system. Options available for transferring management responsibilities for NPS properties located in the District to the District city government include transfer of title, transfer of jurisdiction, memoranda of agreement or cooperative agreements, leases, and partnerships with public or private entities. Some of the options would require enacting new legislation, while others may be exercised by NPS and the District under existing legislation.
FCC was established by the Communications Act of 1934 (Communications Act). The Communications Act, as amended, specifies that FCC was established “for the purpose of regulating interstate and foreign commerce in communications by wire and radio so as to make available, so far as possible, to all the people of the United States . . . a rapid, efficient, Nation-wide, and world-wide wire and radio communication service with adequate facilities at reasonable charges, for the purpose of the national defense, for the purpose of promoting safety of life and property through the use of wire and radio communication.” FCC is responsible for, among other things, making available rapid, efficient, nationwide, and worldwide wire and radio communication services at reasonable charges and on a nondiscriminatory basis, and more recently, promoting competition and reducing regulation of the telecommunications industry in order to secure lower prices and high-quality services for consumers. FCC established six strategic goals to support its mission:

1. Promote access to robust and reliable broadband products and services for all Americans.

2. Promote a competitive framework for communications services that support the nation’s economy.

3. Facilitate efficient and effective use of nonfederal spectrum to promote growth and rapid development of innovative and efficient communications technologies and services.

4. Develop media regulations that promote competition, diversity, and localism and facilitate the transition to digital modes of delivery.

5. Promote access to effective communications during emergencies and crises and strengthen measures for protecting the nation’s critical communications infrastructure.

6. Strive to be a highly productive, adaptive, and innovative organization that maximizes the benefit to stakeholders, staff, and management from effective systems, processes, resources, and organizational culture.

FCC’s basic structure is prescribed by statute. FCC is composed of five commissioners, appointed by the President and approved by the Senate to serve 5-year terms; the President designates one member to serve as chairman. No more than three commissioners may come from any one political party. The commission has flexibility in how it creates and organizes divisions or bureaus responsible for specific work assigned. Specifically, the Communications Act, as amended, requires the commission to organize its staff into (1) integrated bureaus, to function on the basis of the commission’s principal workload operations, and (2) such other divisional organizations as the commission deems necessary. FCC currently consists of seven bureaus that are responsible for a variety of issues that affect consumers and the telecommunications industry, including complaint analysis, licensing, and spectrum auctions, and 10 offices that provide support services for the bureaus and commission. Appendix II provides a detailed description of each bureau and office. Each bureau is required by statute to include the legal, engineering, accounting, administrative, clerical, and other personnel that the commission determines necessary to perform its functions. FCC has identified attorneys, engineers, and economists as the agency’s main professional categories.
Although FCC has staff offices with concentrations of each profession (attorneys in the Office of General Counsel, engineers in the Office of Engineering and Technology, and economists in the Office of Strategic Planning and Policy Analysis), these professions are also integrated into the bureaus. Under the Communications Act, as amended, FCC has broad authority to execute its functions. The act is divided into titles and sections that describe various powers and concerns of the commission, with different titles describing the laws applicable to different services. For example, there are separate titles outlining the specific provisions for telecommunications services and for cable services. This statutory structure created distinct regulatory "silos" that equated specific services with specific network technologies. However, technological advances in communications infrastructure have led to a convergence of previously separate networks used to transmit voice, data, and video communications. For example, telephone, cable, and wireless companies are increasingly offering voice, data, and video services over a single platform. FCC is charged with carrying out various activities, including issuing licenses for broadcast television and radio; overseeing licensing, enforcement, and regulatory functions for cellular phones and other personal communication services; regulating the use of radio spectrum and conducting auctions of licenses for spectrum; investigating complaints and taking enforcement actions if it finds that there have been violations of the various communications laws and commission rules that are designed to protect consumers; addressing public safety, homeland security, emergency management, and preparedness; educating and informing consumers about communications goods and services; and reviewing mergers of companies holding FCC-issued licenses. The Telecommunications Act of 1996 also expanded FCC's responsibilities for universal service beyond the traditional provision of affordable, nationwide access to basic telephone service to include support for eligible schools, libraries, and rural health care providers. Two major laws that affect FCC's decision-making process are the Government in the Sunshine Act of 1976 (Sunshine Act) and the Administrative Procedure Act of 1946. Government in the Sunshine Act of 1976: The Sunshine Act applies to agencies headed by collegial bodies. Under the Sunshine Act, FCC is required to provide sufficient public notice that a meeting of the commissioners will take place. The agency generally must also release the meeting's agenda, known as the Sunshine Agenda, no later than 1 week before the meeting. In addition, the Sunshine Act prohibits more than two of the five FCC commissioners from deliberating with one another to conduct agency business outside the context of a public meeting. Administrative Procedure Act of 1946: The Administrative Procedure Act (APA) is the principal law governing how agencies make rules. The law prescribes uniform standards for rulemaking, requires agencies to inform the public about their rules and proposed changes, and provides opportunities for public participation in the rulemaking process. Most federal rules are promulgated using the APA-established informal rulemaking process, which requires agencies to provide public notice of proposed rule changes, as well as provide a period for interested parties to comment on the notices--hence the "notice and comment" label.
The notice and comment procedures of the APA are intended to encourage public participation in the administrative process, to help educate the agency, and thus to produce more informed agency decision making. Experts have noted that public participation promotes legitimacy by creating a sense of fairness in rulemaking, and transparency helps both the public and other branches of government to assess whether agency decisions are in fact being made on the grounds asserted for them and not on other, potentially improper, grounds. APA does not generally address time frames for informal rulemaking actions, limits on contacts between agency officials and stakeholders, or requirements for "closing" dockets. FCC implements its policy initiatives through a process known as rulemaking, which is the governmentwide process for creating rules or regulations that implement, interpret, or prescribe law or policy. When developing, modifying, or deleting a rule, FCC relies on public input provided during the rulemaking process. Before beginning the rulemaking process, FCC may issue an optional Notice of Inquiry (NOI) to gather facts and information on a particular subject or issue to determine if further action by FCC is warranted. Typically, an NOI asks questions about a given topic and seeks comments from stakeholders on that topic. If FCC issues an NOI, it must issue a Notice of Proposed Rulemaking (NPRM) before taking final action on a rule, unless an exception to notice and comment rulemaking requirements applies. FCC issues NPRMs to propose new rules or to change existing rules, and the issuance of an NPRM signals the beginning of the rulemaking process. The NPRM provides an opportunity for stakeholders to submit their comments on the proposal and to reply to the comments submitted by other stakeholders. A summary of the NPRM is published in the Federal Register and announces the deadlines for filing public comments and reply comments. The NPRM also indicates the rules for ex parte communications between agency decision makers and other persons during the proceeding. An ex parte presentation is a communication that addresses the merits or outcome of a proceeding and that, if written, is not served on all parties to the proceeding or, if oral, is made without advance notice to the parties or an opportunity for them to be present. FCC generally classifies its rulemaking proceedings as "permit-but-disclose" proceedings, in which ex parte presentations to FCC officials are permissible but subject to certain disclosure requirements. Generally, the external party must provide two copies of written presentations to be filed in the public record. If an external party makes an oral ex parte presentation that presents data or arguments not already reflected in the party's written comments or other filings in that proceeding, the party must provide FCC's Secretary with an original and one copy of a summary of the new data or arguments to be filed in the public record. Once FCC places an item on the Sunshine Agenda, which lists the items up for a vote at the next open commission meeting, ex parte contacts are restricted, subject to several exemptions. In addition, FCC provides stakeholders the ability to submit electronic comments via the FCC Web site. After reviewing the comments received in response to an NPRM, FCC may issue a Further Notice of Proposed Rulemaking (FNPRM) seeking additional public comment on specific issues in the proceeding.
Following the close of the reply comment period, FCC officials may continue discussing the issue with external parties through ex parte presentations. Staff in the bureaus assigned to work on the order begin developing the item, analyzing the public record and the information provided in ex parte contacts to propose an action for the commission to vote on, such as adopting final rules, amending existing rules, or stating that there will be no changes. The chairman decides when the commission will vote on final rules and whether the vote will occur during a public meeting or by circulation, which involves electronically circulating written items to each of the commissioners for approval. See figure 1 for an illustration of the steps in FCC's rulemaking process. Although FCC has established some function-based bureaus and reorganized its bureaus to reflect some changes in the telecommunications market, further market evolution and the growth of new technologies have continued to create crosscutting issues that span several bureaus. FCC's bureaus are still somewhat structured along the traditional technology lines of wireless, wireline, satellite, and media, despite the fact that one company may provide services that span such distinctions or that competing services may be regulated by different bureaus. Since the Telecommunications Act, chairmen have made changes to FCC's bureau structure. In 1999, Chairman Kennard issued A New FCC for the 21st Century, which called for reorganizing FCC's bureau structure along functional, rather than technological, lines in order to carry out FCC's core responsibilities more productively and efficiently. Subsequently, FCC consolidated enforcement functions and personnel from the Common Carrier, Mass Media, Wireless Telecommunications, and Compliance and Information Bureaus into a new Enforcement Bureau. In addition, FCC consolidated consumer complaint and public information functions of the various bureaus into a Consumer Information Bureau. Chairman Powell also issued a reorganization plan to promote a more efficient, responsive, and effective organizational structure. This reform and reorganization plan included creating three new bureaus and one new office. FCC consolidated the Mass Media Bureau and Cable Services Bureau into a new overarching Media Bureau, and restructured the Common Carrier Bureau and renamed it the Wireline Competition Bureau. Additionally, the Consumer Information Bureau was given increased policymaking and intergovernmental affairs responsibilities and was renamed the Consumer and Governmental Affairs Bureau. Finally, the Office of Strategic Planning and Policy Analysis subsumed the Office of Plans and Policy. In 2006, under Chairman Martin, FCC established the Public Safety and Homeland Security Bureau, consolidating existing public safety and homeland security functions and issues from the Enforcement, Wireless Telecommunications, Wireline Competition, and Media Bureaus, and the offices of Engineering and Technology, Strategic Planning and Policy Analysis, and Managing Director. Figure 2 shows FCC's current structure and how the bureaus and offices have changed since the Telecommunications Act. Despite these changes in FCC's organizational structure, the changing telecommunications market and the development of new technologies have created new issues that span several bureaus.
For example, broadband services--which became available in the late 1990s--do not fall exclusively within the jurisdiction of a particular FCC bureau or regulatory category. As a result, FCC created broadband regulations in a piecemeal fashion, issuing four separate orders (one for cable modems, one for facilities-based wireline broadband Internet access, one for broadband over power line, and one for wireless broadband Internet access) to regulate competing methods of providing broadband service under the same standard. The Telecommunications Act allows FCC to classify services as telecommunications services or information services, the latter being subject to fewer regulatory restrictions. In 2002, FCC determined that cable modem service should be categorized as an information service. Three years after FCC issued the cable modem order and shortly after the Supreme Court upheld FCC's regulatory classification for cable modem service, FCC adopted an order that granted providers of facilities-based wireline broadband Internet access the same regulatory classification and treatment as cable modem Internet access providers. In November 2006, FCC issued an order classifying broadband over power line-enabled Internet access service as an information service. In March 2007, FCC issued an order classifying wireless broadband Internet access as an information service. In addition, as companies that once provided a distinct service (such as cable and telephone companies) have shifted to providing bundles of services (voice, video, and data services) over a broadband platform, new debates have arisen regarding how rules previously intended for a specific industry and service (such as universal service, customer retention rules, and video franchising rules) should be applied to companies now providing multiple services. FCC officials told us they are currently looking across the agency to identify challenges that convergence poses to the existing structure and will first focus on how FCC's systems, such as its data collection efforts, can be modified to address these challenges, but they may consider structural changes later. According to agency officials, FCC uses informal interbureau collaboration, working groups, and task forces to address convergence and crosscutting issues, but FCC lacks written policies outlining how interbureau coordination and collaboration are to occur. FCC handles convergence by holding interbureau meetings to discuss the progress of items and to address upcoming issues. When a crosscutting item requires the input of multiple bureaus or offices, one is considered the "lead" and is responsible for coordinating with all other bureaus or offices that have a direct concern or interest in the document and for ensuring that they have the opportunity to review and comment on an agenda item prior to submission to the commission. Generally, if a proceeding (such as a petition or draft order) clearly falls under a specific bureau's purview, that bureau will serve as the lead on the issue. The lead bureau is determined by each bureau's management or by precedent, that is, by which bureau handled a particular issue in the past. For example, the Wireless Telecommunications Bureau would be the lead for items regarding licensed spectrum rules because it has handled these issues in the past. FCC officials told us that on more complex issues, or items that do not have an evident lead bureau, the chairman is ultimately responsible for selecting the lead bureau.
Although FCC relies on this interbureau coordination, it does not provide specific steps or guidance regarding how or when this coordination is to occur, with some limited exceptions. FCC officials confirmed that there are no written policies outlining how the bureaus should coordinate with one another. FCC's lack of written policies and its reliance on informal interbureau coordination to address issues that extend beyond the purview of a single bureau can result in inefficiencies. For example, one FCC official told us that while FCC was conducting a merger review of two major media companies, the review process was delayed because of confusion regarding which bureau was responsible. Since each of the merging companies had assets regulated by different FCC bureaus, it was unclear which bureau was the designated lead and would be responsible for a specific portion of the merger review process. Although the chairman eventually designated a lead bureau, the time this took slowed the review, and the overall lack of coordination made the process less efficient. Our Internal Control and Management Evaluation Tool emphasizes the importance of internal communications, specifically noting the need for mechanisms that allow for the easy flow of information down, across, and up the organization, including communications between functional activities. In addition, the absence of written policies allows interbureau collaboration and communication to vary from chairman to chairman. FCC officials noted significant differences in prior chairmen's emphasis on bureau interaction. For example, former Chairman Kevin Martin required staff to seek approval from management before contacting other bureau and office staff. Current and former FCC officials told us that such policies limited interbureau collaboration and staff-to-staff communication. By contrast, then-Acting Chairman Copps instituted a weekly Chairman's Office Briefing with bureau and office chiefs, or their designees, and a representative from each of the commissioners' offices with the stated intent of promoting openness, a practice that continues under Chairman Genachowski. In addition, an FCC official told us that under Chairman Powell, FCC had a memorandum outlining how one bureau was to note its concurrence or disagreement with a draft order prepared by another bureau, but that the practice largely lapsed under Chairman Martin. The lack of written policies also gives the chairman complete discretion when assigning bureau staff to address an item, which has led to instances in which not all relevant staff were included in developing an item. For example, according to FCC officials, the Wireless Telecommunications Bureau was not included in drafting a universal service order that increased the portion of universal service funding provided by wireless customers. FCC officials told us the resulting order did not fairly characterize the wireless industry's prior efforts, which led the industry to file reconsideration petitions that required additional time to address. Other officials told us that in 2008, FCC received filings in its Wireline Competition Bureau and its Enforcement Bureau regarding allegations that Comcast was discriminating against customers using peer-to-peer sharing protocols to exchange videos.
FCC officials told us that then-Chairman Martin directed the Office of General Counsel to draft a resolution without coordinating or discussing the issue with the other bureaus and that this caused uncertainty in the Enforcement Bureau regarding how to address pending complaints. FCC officials and outside stakeholders stated that communication among bureaus is necessary for addressing convergence and other crosscutting issues under the current bureau structure. Three FCC officials told us that convergence in the telecommunications market requires FCC's bureaus to actively communicate with one another so they can address issues that span multiple bureaus. One of these officials also noted that convergence makes active communication among bureaus even more important because if communication fails or does not take place, issues might inadvertently go unaddressed before information is presented to the commissioners and their staff. FCC's functional offices, such as the Office of Engineering and Technology (OET) and the Office of Strategic Planning and Policy Analysis (OSP), have a broader scope than the platform-based bureaus and address some of the issues posed by convergence, but the chairman's influence can affect FCC's ability to use these offices to address crosscutting issues. With regard to OET, stakeholders, including commissioners and trade associations, have raised concerns about whether the chairman's authority over office staff affects OET's ability to provide independent expertise. Two commissioners told us that although OET has high-quality staff, they question whether the information OET provides is impartial, since all bureau and office chiefs report to the chairman. One of the commissioners emphasized that without reliable, unbiased information, it can be difficult to make good decisions on scientific and technical questions. Three trade associations also expressed concern about the independence of OET, with one indicating that there is no way to tell whether the information coming from OET is independent of the chairman or the best of several options. Similarly, the emphasis FCC places on OSP and the work it does varies according to the chairman, and in recent years, OSP's output has diminished. OSP, working with the Office of Managing Director, is responsible for developing a strategic plan identifying short- and long-term policy objectives for the agency; working with the chairman to implement policy goals; and acting as expert consultants in areas of economic, business, and market analysis and other subjects that cut across traditional lines, such as the Internet. One former chief economist told us that each chairman has discretion over how he will use OSP and that the role of the office in providing economic analyses therefore depends on whether the chairman values economic research. Another former chief economist noted that FCC's emphasis on economic analysis depends on the chairman's preferences. OSP is responsible for producing publicly available working papers that monitor the state of the communications industry to identify trends, issues, and overall industry health. However, OSP released no working papers between September 2003 and February 2008, when it issued three, and it has released none since.
Given OSP's responsibility for developing a strategic plan that identifies short- and long-term policy objectives for the agency, a lack of research can put FCC at a distinct disadvantage in preparing for the future. To address these issues, some stakeholders we spoke with suggested that adding more resources to OSP or creating a separate economics bureau would allow for more independent and robust economic analysis. One former chief economist told us that although the research function of FCC is under OSP's purview, OSP does not have the resources it needs, and providing additional resources would help the office produce more independent and higher-quality analyses. A former chairman expressed similar concerns about OSP's resources. Two other former chief economists suggested that if economists were centralized in one group or office, economic analysis would have greater influence in the decision-making process. Similarly, a researcher found that another independent regulatory agency's use of an independent and centralized Bureau of Economics leads to routine use of cost-benefit analysis during its investigations and enforcement actions. Finally, a trade association told us that OSP has always been on the periphery of the policy-making process because it lacks the budget and staff to complete comprehensive industry analysis and that it needs additional resources to perform more useful policy analysis. While some stakeholders have suggested consolidating economists in a centralized bureau, others have noted the need to maintain economic expertise within the bureaus. Officials from each bureau we spoke with told us that having economists embedded in each bureau is useful because it allows the bureaus to access economic expertise more easily. For example, economists may lead teams on particular issues, review mergers, gather subscriber data, create economic development policies, manage industry reporting, and produce economic reports and information. A bureau's ability to function could therefore suffer if its economists were removed. One study that examined organizational structures for attorneys and economists in enforcement agencies found that having economists and attorneys working together in the same division and organized around a particular industry or sector, as they do at FCC, is advantageous for a number of reasons. The study found the main advantage of this structure is that it focuses economic analysis on the questions of interest to the ultimate decision makers. Additionally, the strong links between economists and attorneys working in the same division help ensure that economists are answering all the legally relevant questions and that decision makers can direct the efforts of economists to answer the questions that concern them. However, these arguments do not necessarily preclude the need to examine OSP's role and determine whether it is able to address the economic implications of broad policy issues. Several stakeholders have proposed a variety of options for restructuring FCC. One proposal is to replace industry-based bureaus with bureaus divided along functional goals. Some stakeholders have expressed concerns that FCC's current bureau structure may lead to bureaus identifying with the industry they regulate, rather than taking an overarching view of an issue.
One trade group representative and a former FCC chairman stated that this leads to "fiefdoms," in which staff members begin to act more like advocates for the industry they regulate than as experts looking for the best decision. In addition, stakeholders stated that the culture of the bureaus may vary--depending on their history and the industry they regulate--and that this could create problems if competing services are treated differently based on which bureau is responsible for regulating the service. In response to such concerns, some stakeholders suggested that FCC create new functional bureaus that focus on areas that span a variety of service providers and industries, such as competition, licensing, and spectrum management. For example, one former FCC official suggested that FCC could create one bureau to handle spectrum management issues, which are currently divided among the Wireless Telecommunications Bureau, the Office of Engineering and Technology, the International Bureau, and the Public Safety and Homeland Security Bureau. Another stakeholder suggested FCC structure bureaus along overarching policy goals, such as culture and values (which would include broad issues such as obscenity, advertising rules, and public broadcasting) and markets (which would include allocation of spectrum, competition, and market analysis). The stakeholder stated that by reorganizing along such lines, FCC would create departments with technology- and industry-neutral responsibilities for key social mandates, which would better enable FCC to address issues that span industry lines. However, a number of stakeholders and FCC officials expressed caution when discussing restructuring or reforming the bureaus. Restructuring is often resource-intensive and disruptive for an agency and can affect staff morale. In addition, it is unclear whether restructuring the bureaus would improve FCC's ability to regulate these industries, since the Communications Act, as amended, establishes different regulatory regimes based on how a service is provided. Some industry and FCC stakeholders we interviewed also noted that in some cases the current bureau structure works well, such as when issues fall within a specific bureau's purview. For example, one FCC official noted that in some cases it is useful to have various functions housed in a specific industry-based bureau, explaining that because rulemaking and licensing functions are both housed in the Wireless Telecommunications Bureau, bureau staff understand the implications of administering the licensing rules made during the rulemaking process. Similarly, FCC officials stated that the industry-based bureaus allow staff to develop in-depth expertise on an issue. For example, an FCC official stated that the Media Bureau's video division staff understand how to address most broadcast licensing and market issues and that splitting up the staff could result in a loss of group cohesion and institutional knowledge. Regardless of the organizational structure FCC pursues, technological advances and marketplace changes will continue to reshape the regulatory landscape the commission oversees. To anticipate and respond quickly to these changing conditions, FCC will need mechanisms to ensure that staff can routinely and reliably coordinate and communicate across bureaus in order to foster and harness FCC's collective knowledge on issues that span the bureaus.
The absence of written policies outlining how bureaus should communicate and collaborate on crosscutting issues has led to inefficiencies in FCC's decision-making process by leaving the extent of interbureau collaboration subject to the preferences of the chairman. FCC chairmen have varied in their policies regarding commissioner access to bureau and office analyses during the decision-making process. For example, then-Acting Chairman Copps publicly stated that commissioners would have unfettered access to the bureaus, adding that bureaus should respond to requests from commissioners' offices directly and as quickly as possible, without preapproval from the chairman's office. In addition, former Chairman Kennard established internal procedures outlining how commissioners should receive information from bureaus and offices during the decision-making process. These procedures specified that bureau and office chiefs would provide detailed oral briefings or memoranda on upcoming items upon the request of commissioners and would solicit feedback from commissioners while developing draft items. Under Chairman Martin, there was a perception among some FCC commissioners and staff that the commissioners could not easily access bureau and office analyses. Stakeholders also told us that some previous chairmen had similarly limited commissioner access to bureau and office analyses. One rationale behind such policies was that giving the commissioners unrestricted access to agency staff could hinder the decision-making process by allowing commissioners to search among the bureau staff for support for any given position. Similarly, some stakeholders expressed concerns about providing commissioners full access to bureau staff. For example, one FCC official recounted prior instances in which commissioners requested information that placed bureau staff in the middle of commission-level policy disputes, and a former FCC official expressed concerns about commissioners making requests that could tie up bureau resources. No explicit statutory or regulatory language outlines commissioners' access to internal information. The Communications Act, as amended, states that it is the duty of the chairman to coordinate and organize the work of the commission in such a manner as to promote prompt and efficient disposition of all matters within the jurisdiction of the commission. In implementing this duty, FCC's chairman sets the agency's agenda by directing the work of the bureaus and offices, including the drafting of agenda items for commission consideration. While FCC's Agenda Handbook does specify that the bureaus and offices should provide commissioners copies of draft items for consideration and editing 3 weeks before the commission votes on an item at a public meeting, it does not specify the extent to which commissioners have access to the bureau and office staff and their analyses, including their ability to ask the staff questions about draft items or the analyses supporting those items. The absence of internal policies or statutory requirements has enabled each chairman to define how and when other commissioners receive bureau and office analyses during the decision-making process. Controlling commissioner access to staff analyses and opinions may subvert the commission's decision-making process and raises concerns among FCC officials and external stakeholders regarding the transparency and informed nature of that process.
Many stakeholders we interviewed, including former FCC officials and current FCC commissioners and bureau officials, noted the importance of bureau analyses to the commission's decision-making process, with some stating that commissioners' lack of access to bureau analyses can negatively affect the quality of FCC's decisions. Two bureau officials explained that providing commissioners access to information improves FCC's decisions by allowing for more informed deliberations. FCC officials also told us that in situations where commissioners are unable to obtain information from the bureaus and offices, commissioners may refuse to vote on an item, thereby delaying decision making. The chairman's ability to control the bureau and office analyses provided to commissioners has raised concerns as to whether the information provided reflects the bureaus' and offices' independent analyses or the chairman's position on an issue. In addition, a current and a former commissioner stated that the chairman's ability to influence what information FCC staff provide to commissioners has increased the commissioners' reliance on outside sources of information. The former commissioner noted that this raises concerns about the quality of information the commissioners may rely on and the transparency of the decision-making process, since private groups may provide data that support a particular agenda. Regulatory bodies headed by multimember commissions, such as FCC, are often preferred to departments or agencies headed by a single administrator because group decision making under conditions of relative independence is considered preferable to dominance by a single individual. For example, a major review of independent regulatory agencies concluded that a distinctive attribute of commission action is that it requires concurrence by a majority of members of equal standing after full discussion and deliberation and that collective decision making is advantageous where problems are complex, the relative weight of various factors affecting policy is not clear, and the choices are numerous. Another study promoted the use of the commission structure for FCC in particular, stressing that the commission prevents a single administrator from having undue influence over the sources of public information. We have also recognized the need to provide decision makers with the information they need to carry out their responsibilities. Our internal control standards state that information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enable them to carry out their responsibilities. We also reviewed the policies of other independent regulatory agencies with regard to commissioner access to staff analyses. Officials at the Federal Energy Regulatory Commission and the Federal Trade Commission (FTC) told us that they do not have formal policies ensuring commissioner access to information but stated that commissioners have not experienced problems obtaining information in the past. For example, an FTC official told us that the commission has a long-standing practice that the commissioners have access to all of the information needed to perform their duties.
However, the Nuclear Regulatory Commission (NRC) is statutorily required to ensure that commissioners have full access to the information needed to perform their duties and that commissioners share "equal responsibility and authority in all decisions and actions of the commission." In implementing this requirement, NRC has developed and made publicly available its decision-making procedures, including commissioners' rights to information. These procedures outline the responsibilities of the chairman and the commissioners, how commissioners receive items from commission staff, and how items are voted on. Key ways in which NRC's procedures provide commissioners access to information include the following:

Requiring that draft and final analyses by NRC staff be provided simultaneously to all commissioners, including the chairman.

Establishing that each commissioner, including the chairman, has equal responsibility and authority in all commission decisions and actions and has full and equal access to all agency information pertaining to commission responsibilities.

Balancing commissioner access to staff analyses with the chairman's ability to direct resource expenditures. For example, although individual commissioners can request information or analyses from NRC staff, if the request requires significant resources to fulfill and questions of priority arise, the office or the commissioner can ask the chairman to resolve the matter. If the chairman's decision is not satisfactory to the requesting commissioner or the office, either can bring the matter for a vote before the full commission.

NRC officials told us that these long-standing internal procedures, which are reviewed approximately every 2 years, have been helpful in avoiding protracted disputes over the prerogatives and responsibilities of the chairman and the other commissioners and in ensuring that access issues are handled consistently. When issuing an NPRM to gather public input before adopting, modifying, or deleting a rule, FCC rarely includes the text of the proposed rule in the notice, which may limit the effectiveness of the public comment process. A 2008 FCC draft order noted that from 1990 through 2007, the commission issued approximately 3,408 NPRMs, 390 (or 11.4 percent) of which contained the text of the proposed rules under consideration. According to A Guide to Federal Agency Rulemaking, a resource guide created by the Administrative Law and Regulatory Practice and Government and Public Sector Lawyers Division of the American Bar Association, "most agencies publish the text of the proposed rule when commencing rulemaking, and some enabling statutes expressly require that the agency do so." Widespread concern exists regarding the lack of detail provided in FCC's NPRMs, which generally ask for comment on wide-ranging issues, making the NPRM more like an NOI. FCC officials told us that FCC uses NPRMs rather than NOIs (the traditional method of gathering broad input on a topic) so that it can proceed directly to issuing a rule once one is developed. By contrast, if FCC used an NOI to gather information, it would then need to issue an NPRM before issuing a rule. Several stakeholders have stated that such broad NPRMs limit their ability to submit meaningful comments that address FCC's information needs and increase FCC's reliance on information provided in ex parte contacts.
For example, the Small Business Administration's (SBA) Office of Advocacy noted its concerns about FCC's use of NPRMs instead of NOIs to collect broad information on a number of issues. It argues that by issuing an NPRM that lacks specific proposals, FCC creates uncertainty in the industry, resulting in thousands of comments that can only speculate as to what action FCC may take and the potential impacts. SBA's Office of Advocacy adds that small businesses, in particular, are often overwhelmed by the scope of a vague NPRM and cannot contribute meaningfully to the rulemaking process. In addition, part of the value of the public comment process derives from external stakeholders' ability to respond to other groups' comments, thereby improving the public debate on an item. However, if parties are unsure of FCC's intentions because of a lack of specificity in the NPRM and they submit general comments or wait until the ex parte process to provide input on an item, public debate can be limited. The APA requires that an NPRM include "either the terms or substance of a proposed rule or a description of the subjects and issues involved." Since the public is generally entitled to submit its views and relevant data on any proposals, the notice must be sufficient to fairly apprise interested parties of the issues involved, but it need not specify every precise proposal that the agency may ultimately adopt as a rule. APA's requirements are satisfied when the rule is a "logical outgrowth" of the actions proposed, which means that interested parties "should have anticipated the agency's final course in light of the initial notice." Although APA does not specifically require that NPRMs contain proposed rule text, some studies of federal rulemaking have identified the benefits of providing proposed rule text for public comment. For example, A Guide to Federal Agency Rulemaking notes that "specific proposals help focus public comment, and that, in turn, assists reviewing courts in deciding whether interested persons were given a meaningful opportunity to participate in the rulemaking … a focused and well-explained NPRM can educate the public and generate more helpful information from interested persons." Similarly, in its analyses of transparent governing and public participation in the rulemaking process, ICF International recommended that agencies garner more substantive public comments by issuing an Advance Notice of Proposed Rulemaking that lays out specific options under consideration and asks specific questions that are linked to a Web form. FCC's current ex parte process can lead to vague or last-minute ex parte summaries of meetings between FCC and external officials. The APA places no restriction on ex parte communication between agency decision makers and other persons during informal rulemaking. However, FCC has rules about such contacts that are intended to protect the fairness of its proceedings by providing an assurance that FCC decisions are not influenced by off-the-record communications between decision makers and others. Stakeholders must provide FCC with two copies of written ex parte presentations, and the original and a copy of a summary of the new information provided during oral ex parte contacts, to be filed in the public record. FCC places the burden of preparing an ex parte summary and ensuring that it is complete on the external party.
FCC's ex parte rules provide general guidance on what is sufficient, stating that summaries should generally be "more than a one or two sentence description" and not just a listing of the subjects discussed. When it is unclear whether data or arguments presented in an ex parte contact are already in the public record, FCC advises that parties briefly summarize the matters discussed at the meeting. FCC officials told us that they are in the process of reviewing and potentially changing the ex parte process. However, stakeholders expressed concerns about the submission of vague ex parte summaries under the current process. For example, an ex parte summary may simply state that an outside party met with FCC officials to share its thoughts on a proceeding. Stakeholders told us that vague ex parte summaries reduce transparency and public discourse in FCC's decision-making process by limiting stakeholders' ability to determine what information was provided in the meeting and to discuss or rebut that information. In 2002, an FCC commissioner stated that she believed that the "cursory filings that [FCC] routinely permits" are an apparent violation of its rules requiring more than a one or two sentence description. Similarly, a former acting chairman noted the need to "enhance, or at least enforce," FCC's ex parte rules so that the public will find more than a brief ex parte letter that identifies only who attended a meeting, rather than what was said in the meeting. According to FCC, the ex parte process is an important avenue for collecting and examining information during the decision-making process. FCC has previously told us that it generally does not produce its own studies to develop a rule. Rather, FCC relies on stakeholders to submit information and analysis that is then placed in the docket so that FCC and other stakeholders can critique the information. According to FCC officials, this results in both transparency and quality information because each stakeholder has had an opportunity to review and comment on all of the information in the docket. In addition, according to an official in FCC's Office of General Counsel, ex parte meetings allow stakeholders and FCC to focus on specific issues of interest to FCC and to identify potential weaknesses in the existing arguments. An official in FCC's Office of General Counsel acknowledged concerns that some ex parte summaries are cursory and vague and noted that to address this, FCC periodically sends reminders to commenters regarding the information required in ex parte summaries and has placed additional information about these requirements on FCC's Web site. In 2000, FCC issued a public notice reiterating the public's responsibilities in the ex parte process. This notice stated that "the duty to ensure the adequacy of ex parte notices … rests with the person making the presentation. Staff members have the discretion to request supplemental filings if they feel that the original filing is inadequate, but the obligation to file a sufficient notice must be satisfied regardless of possible requests by the staff." FCC does not proactively determine whether the content of the summaries is sufficient. Specifically, FCC relies on a complaint-driven process to ensure that ex parte submissions comply with FCC's rules. FCC's Office of General Counsel reviews ex parte communications if it receives a complaint.
However, since parties not present at a meeting are generally unsure of what occurred, it is difficult for external stakeholders to determine whether an ex parte submission is sufficiently detailed. In addition, it can be difficult to determine whether an ex parte summary is sufficient because, if a party is simply restating information it has already presented, it can file a short ex parte summary or none at all. After the Office of General Counsel receives a complaint, it provides copies to the party referred to in the complaint and to the FCC staff present during the meeting, and the parties provide a written response to the office with their version of events. The Office of General Counsel is responsible for determining whether the issue has been appropriately resolved. FCC receives, on average, one complaint a month about ex parte communications. Other aspects of the ex parte process can challenge stakeholders' ability to submit information during FCC's decision-making process. For example, one group noted that unlike public comments, which must be submitted by a specific deadline, the ex parte process has no definitive end date, and groups must expend resources tracking ex parte submissions until the relevant item is voted on by the commission. In addition, stakeholders must attempt to determine what information was provided based on summaries of the ex parte meeting and submit written responses or attempt to meet with FCC officials to offer a countervailing viewpoint. This can present a particular burden for stakeholders with limited resources for tracking and responding to ex parte contacts. For example, two organizations told us that it is more difficult for groups that must travel to Washington, D.C., to participate in person at ex parte meetings than for groups with a presence inside Washington. One organization told us of instances in which FCC canceled meetings with it at the last minute, after the group had traveled from outside Washington, D.C., to meet with FCC. Several stakeholders also raised concerns regarding prior incidents in which parties made substantive ex parte submissions just before or during the Sunshine period, during which external contact with FCC officials is restricted, so that other groups were unable to respond to the information provided. Although, subject to certain exceptions, external parties are forbidden from contacting FCC officials after release of the Sunshine Notice (generally 1 week prior to a vote), FCC officials are allowed to initiate contact with external parties for additional information on an agenda item. This can lead to ex parte submissions affecting decisions without allowing for public comment on the information provided. For example, during the AT&T-BellSouth merger review, an ex parte communication occurred the day before the scheduled vote. During the communication, FCC proposed merger conditions, and the ex parte summary was filed the day of the proposed vote, thus preventing public comment and expert review. However, in response to complaints from the other commissioners, Chairman Martin delayed the merger vote to allow for public comment on the new changes.
An official in FCC's Office of General Counsel told us that there are legitimate concerns about stakeholders' ability to respond to ex parte presentations made during the Sunshine period pursuant to a Sunshine period exception, but added that if this occurs, stakeholders can ask FCC officials to invite them to file a counter ex parte communication. Finally, although parties are required to file a summary of ex parte contacts with FCC's Secretary, not all commissioners may receive a copy of this summary. For example, if a paper copy is filed shortly before a scheduled vote, there may not be adequate time for the summary to be scanned and placed in the public record. FCC officials told us that there is currently no mechanism for notifying commissioners that ex parte summaries have been filed and added that commissioners rely on the public record to identify this information. Other federal agencies have implemented different guidelines for the ex parte process. For example, the Department of Transportation (DOT) issued an order and accompanying procedures noting the importance of providing interested members of the public adequate knowledge of contacts between agency decision makers and the public during the rulemaking process. DOT establishes that if such contact occurs prior to the issuance of an NPRM and is one of the bases for issuing the NPRM, the contact should be discussed in the preamble of the notice. In addition, although DOT recommends holding such contact to a minimum after the close of the reply comment period, noting that contacts occurring at this stage of the process tend to be hidden, DOT states that if such contacts do occur, the meeting should be announced publicly or all persons who have expressed interest in the rulemaking should be invited to participate. DOT also requires that records of such contacts be promptly filed in the public record and states that while a verbatim transcript is not required, a mere recitation that the listed participants met to discuss a named general subject on a specified day is inadequate. Rather, DOT notes that such records should include a list of the participants, a summary of the discussion, and a specific statement of any commitments made by department personnel. Officials from FTC told us that agency personnel are responsible for submitting ex parte communications in writing to the FTC Secretary so that they can be placed on the public record. NRC officials told us that if comments submitted after the public comment period raise a significant new idea, NRC would place those comments in the record and might reopen the comment period to get reactions to the submission. NRC officials also noted that when NRC issues a request for public comments, comments received after the due date will be considered if it is practical to do so and that NRC does reopen or extend comment periods to give people more time to consider complex issues. Stakeholders concerned about FCC's current ex parte process have suggested a number of changes. Suggestions include enhancing FCC's guidelines for ex parte summaries (for example, by requiring FCC officials to reject incomplete summaries or to certify that the summaries they receive accurately capture the substance of the information provided in meetings), improving FCC's enforcement of its ex parte requirements, and limiting FCC's use of last-minute ex parte contacts to inform its decisions.
An FCC official noted that one possible solution for ex parte submissions made during the Sunshine period would be to create an automatic right of response for other stakeholders, but added that allowing more contact during the Sunshine period would run counter to the idea of establishing a quiet period for the commissioners to consider an issue before voting. FCC is currently considering possible revisions to its ex parte policies and is exploring new methods of collecting public comment. One method under consideration involves collecting comments through its www.broadband.gov Web site, which allows members of the public to comment on a blog, request ex parte meetings, and obtain information about upcoming workshops. On October 28, 2009, FCC held a workshop on improving disclosure of ex parte contacts, during which participants discussed possible revisions to FCC's current ex parte rules and processes. Some academic and industry stakeholders have voiced concerns that FCC's merger review process allows the agency to implement policy decisions without going through the rulemaking process. Companies holding licenses issued by FCC and wishing to merge must obtain approval from two federal agencies, the Department of Justice (DOJ) and FCC, which do not follow the same standards when reviewing mergers. While DOJ is charged with evaluating mergers through an antitrust lens, FCC examines proposed mergers under its Communications Act authority to grant license transfers. The act permits the commission to grant a transfer only if the agency determines that the transaction would be in the "public interest, convenience, and necessity." A recent Congressional Research Service report noted that the public interest standard is generally considered broader than the competition analysis authorized by the antitrust laws and conducted by DOJ. The report concludes that the commission possesses greater latitude to examine potential effects of a proposed merger beyond its possible effect on competition in the relevant market. In addition, FCC negotiates and enforces voluntary conditions on license transfers under the authority provided by §303(r) of the Communications Act, which grants the commission the authority to "prescribe such restrictions and conditions, not inconsistent with the law, as may be necessary to carry out the provisions" of the act, and §214(c), which grants the commission the power to place "such terms and conditions as in its judgment the public convenience and necessity may require." Several stakeholders told us that FCC has used its merger review authority to obtain agreements from merging parties on issues that affect the entire industry and should be handled via rulemaking, rather than fashioning merger-specific remedies. Stakeholders argue that this may lead to one set of rules for the merged parties and another set of rules for the rest of the industry. For example, rather than using an industry-wide rulemaking to address the issue of whether local telephone companies should be required to provide Digital Subscriber Line (DSL) service without requiring customers to also purchase telephone service, FCC imposed this requirement solely on AT&T and Verizon during merger reviews. One stakeholder stated that by addressing broad policy issues through merger reviews rather than rulemakings, FCC limits public insight and participation in the regulatory process. Other stakeholders argue that FCC's merger review process provides a needed public interest perspective.
In addition to concerns about FCC's merger review process, there are concerns about how FCC enforces its merger conditions. For example, one observer noted that despite requests from consumer groups such as Media Access Project and Public Knowledge, FCC declined to adopt specific enforcement mechanisms to ensure compliance with a series of conditions imposed during the merger review of XM and Sirius, including an "a la carte" mandate and a requirement to provide noncommercial channels. FCC officials told us that each bureau is responsible for ensuring that merger conditions are adhered to. As part of the general decrease in FCC staff that occurred from fiscal year 2003 to fiscal year 2008, the number of engineers and economists at FCC declined. (See fig. 3.) From fiscal year 2003 to 2008, the number of engineers at FCC decreased by 10 percent, from 310 to 280. Similarly, the overall number of economists decreased by 14 percent, from 63 to 54. Although the numbers of engineers and economists decreased over this period, the percentage of the workforce made up of engineers and economists remained the same. The overall decline in the number of key occupational staff occurred during a period of increased need for technical, economic, and business expertise. New technologies, such as handheld and wireless devices, are growing rapidly and challenging existing regulatory structures. FCC also cited a number of economic issues that affect the expertise and workforce required, such as marketplace consolidation and the need to craft economic incentives for incumbent spectrum users to relocate to other spectrum. Additionally, 24 percent of FCC staff responding to the 2008 Office of Personnel Management (OPM) Federal Human Capital Survey disagreed with the statement "the skill level in my work unit has improved in the last year." This was significantly more than the 17 percent of staff from all other agencies responding to the survey who disagreed with the statement. Similarly, several stakeholders we interviewed echoed the importance of increasing the level of expertise in certain areas at FCC and cited concerns regarding insufficient numbers of staff. In addition to the decrease in engineers and economists, FCC faces challenges in ensuring that its workforce remains experienced and skilled enough to meet its mission, including a large number of staff who will be eligible for retirement. FCC projects that 45 percent of supervisory engineers will be eligible for retirement by 2011. While FCC has started hiring more engineers to replace retiring engineers and augment its engineering staff, most hires have been at the entry level. Of the 53 engineers hired in fiscal years 2007 and 2008, 43 were entry-level hires. During this same period, 30 engineers retired. Stakeholders stated that recent graduates sometimes have little experience with or understanding of how policies affect industry. Increasing the number of staff with backgrounds and experience in industry would improve FCC's understanding of industry issues and could lead to better policies, according to stakeholders. For economists, FCC faces an even higher share of staff eligible for retirement by 2011. FCC reports that, as of April 2009, 67 percent of supervisory economists will be eligible to retire, as shown in table 1.
FCC may face challenges in addressing these impending retirements because 56 percent of nonsupervisory economists are also eligible to retire, and FCC did not hire any economists in fiscal years 2007 and 2008. Despite these trends, it is not clear how significantly they have affected the agency’s ability to meet its mission. For example, the 2008 OPM Federal Human Capital Survey showed that, similar to the rest of government, 75 percent of FCC staff agreed with the statement that the workforce has the knowledge and skills necessary to accomplish organization goals. Agency officials also noted that they can shift staff from one bureau to another as needs arise and the regulatory environment changes. For example, as the need for tariff regulation decreased, FCC shifted staff from that area into other areas. However, an FCC official indicated that with the decrease in the number of experienced engineers throughout the agency, more work has shifted to OET. The official added that if the bureaus had additional resources to recruit and retain more experienced engineers, then they could handle more complex issues without relying on OET as much. Furthermore, additional engineering staff would allow the bureaus to reduce the time it takes to conduct analyses and draft items. Additionally, former FCC officials told us that OSP needs additional resources to fulfill its mission. FCC faces multiple challenges in recruiting new staff. One challenge FCC faces, similar to other federal agencies, is the inability to offer more competitive pay. Additionally, not having an approved budget and working under congressional continuing resolutions have hampered hiring efforts for engineers and economists. Competing priorities may also delay internal decisions regarding hiring. For example, OSP has not received the budgetary allocation for hiring new economists in time for the annual American Economic Association meeting for at least the past 4 years. This meeting is the primary recruiting venue for recently graduated economists. When FCC is not able to hire economists at the annual meeting, the agency risks losing skilled candidates who have been offered employment elsewhere. FCC officials told us that OSP has received permission to attend the 2010 American Economic Association meeting and hire at least one economist. FCC also faces issues regarding the morale and motivation of its staff. According to the 2008 OPM Federal Human Capital Survey, FCC staff responses were significantly lower than those of staff at other federal agencies in areas related to motivation, engagement, and views of senior leadership. (See table 2.) Low levels of motivation, commitment, and personal empowerment may exacerbate the challenges FCC faces in recruiting and retaining an experienced staff. For example, stakeholders told us that part of attracting and retaining professional staff is using and valuing their expertise. If expertise is not used or valued, as has occurred in some instances at FCC, this can negatively affect FCC’s ability to recruit top candidates in a given professional field. FCC officials told us that in response to the results from the OPM Federal Human Capital Survey, FCC identified leadership and communication skills as areas of focus. To address these needs, FCC has developed an internal Web site that provides a forum for communication and solicitation of information, concerns, and suggestions from staff within FCC.
To strengthen leadership, FCC is working to implement an executive leadership program for existing leaders and an emerging leadership training program to identify potential leaders within FCC and enhance their skills. FCC has also instituted hiring and staff development programs designed to recruit new staff and develop the skills of its existing staff. While these programs are positive steps that can help attract, retain, and train new staff, it is not clear that these efforts are sufficient to address expertise gaps caused by retirements. Specific efforts include the following:

FCC University: A program established to provide the resources needed to increase the fluency of commission staff in a number of competency areas. Subject matter experts have been continuously and actively involved in defining the training needs and evaluating, designing, and delivering internal courses, and in updating the courses available in the FCC University catalog.

Excellence in Engineering Program: A program that includes both basic and advanced courses in communications technology, a graduate degree program in engineering, and a knowledge-sharing program to increase the exchange of information among staff. The Excellence in Engineering award recognizes engineers, scientists, and other technical staff for outstanding contributions performed in the course of their work at the commission.

Excellence in Economic Analysis Program: A program to ensure staff is fluent in the principles of communication economics. The program consists of ongoing training and development opportunities targeted at, but not limited to, staff economists, economics training for noneconomists, and research tools such as data analysis software. Another component of the program is the Excellence in Economic Analysis Award, which recognizes outstanding contributions to economic analysis at FCC based on the impact of the contribution on FCC policy or its significance for the general base of knowledge in economics or public policy analysis.

Engineer in Training Program: A combined recruitment and accelerated promotion program designed to attract recent engineering graduates and provide them with accelerated promotion opportunities through successful completion of on-the-job training.

FCC has also pursued a variety of strategies to address new expertise needs and human capital challenges. In certain cases, FCC has been able to use direct-hire authority, which streamlines and expedites the typical competitive placement process. FCC was granted direct-hire authority by OPM in response to congressionally mandated requirements for a national broadband plan. In addition to using direct-hire authority, FCC used appointing authorities that are outside of the competitive hiring process, such as Recovery Act appointing authority, temporary consultants, and student appointments, as well as details of staff from other federal agencies, to ramp up its broadband efforts more quickly. FCC also uses multiple methods to determine the critical skills and competencies needed to achieve its mission, including meetings with bureau chiefs and surveys of supervisors and staff. It has set forth occupation-specific competencies for its three key professional areas—engineers, attorneys, and economists. As part of FCC’s workforce planning efforts, bureau and office chiefs identify, justify, and make their requests for positions, including the type of expertise needed, directly to the chairman’s office.
According to FCC, the chairman’s office considers these requests from a commissionwide perspective, which includes the agency’s strategic goals, the chairman’s priorities, and other factors such as congressional mandates. The chairman’s office communicates the approval of requests directly to the bureau or office chiefs and informs the Office of Managing Director of the decision. FCC’s human resources office then works with bureaus and offices to implement approved hiring. This process can make it difficult for FCC to develop and implement a long-term workforce plan because workforce needs are driven by short-term priorities and are identified by compartmentalized bureaus rather than by a cohesive long-range plan that considers emerging issues. In addition, an FCC official noted that since FCC is a small agency and expertise needs change quickly, a particular area could be fully staffed with no need for additional hiring, but if two staff leave in a short time period, then an expertise gap could quickly develop and new staff would need to be hired. FCC officials told us that, because of this, they avoid laying out specific targets that might be impossible or undesirable to achieve due to evolving needs. Additionally, FCC officials told us that due to its size and limited hiring opportunities, it is important for the chairman and senior leadership to be able to adjust the goals identified in its Strategic Human Capital Plan. Without specific targets, however, FCC cannot monitor and evaluate its progress toward meeting its expertise needs. Previously, we identified several key principles that strategic workforce planning should address, including determining the critical skills and competencies that will be needed to achieve current and future programmatic results; developing strategies that are tailored to address gaps in the number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; and monitoring and evaluating an agency’s progress toward meeting its human capital goals. Periodic measurement of an agency’s progress toward human capital goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. For example, a workforce plan can include measures that indicate whether the agency executed its hiring, training, or retention strategies as intended and achieved the goals for these strategies, and how these initiatives changed the workforce’s skills and competencies. FCC has made efforts to determine the skills and competencies that are needed to achieve programmatic goals and has developed workforce hiring and training strategies. In addition, FCC’s current Strategic Human Capital Plan identifies skills and subspecialties needed in the future workforce. However, the plan does not establish specific targets for these needs or measures for evaluating FCC’s progress in meeting them. FCC officials told us they expect to develop a revised Strategic Human Capital Plan in support of a new FCC Strategic Plan, which they anticipate completing by the end of fiscal year 2010. Additionally, FCC is in the process of finalizing an OPM-required accountability plan to accompany its Strategic Human Capital Plan. It remains unclear whether FCC’s actions are sufficient to ensure that it retains a skilled workforce that can achieve its mission in the future.
FCC regulates the telecommunications industry—an industry that is critical to the nation’s economy and public safety and that directly affects the ways in which Americans conduct business, socialize, and get their news and entertainment. In recent years, the industry has rapidly evolved, and changing technologies have created new issues that span FCC bureaus and require the expertise of a variety of FCC staff. These changes highlight the need for FCC to ensure that its decisions are fully informed by promoting internal communication and coordination among various bureaus and offices, ensuring commissioner access to staff analyses, effectively collecting public input on its proposed policy changes, and developing methods to ensure it has the staff expertise needed to address these issues. However, we identified several challenges in these areas. At the bureau and office level, FCC’s lack of written procedures for facilitating the flow of information within the agency has in some cases led to ineffective interbureau coordination and allowed prior chairmen to limit internal communication among staff. In addition, it is unclear whether the roles of OET and OSP—two offices established to provide independent expertise on complex, crosscutting issues—are clearly defined or are overly subject to a chairman’s preferences. Without written interbureau coordination procedures or clearly defined roles and responsibilities, FCC may be limited in its ability to address crosscutting issues. At the commission level, the lack of statutory requirements or internal policies on commissioners’ rights and responsibilities during the decision-making process, including their right to bureau and office analysis, has allowed some chairmen to control how and when commissioners receive information from the bureaus and offices. Other independent regulatory agencies have varied in how they address this issue. Ultimately, if commissioners do not have adequate access to information, then the benefits of the commission structure—robust group discourse and informed deliberation and decision making—may be hampered. In addition, while FCC relies heavily on public input to inform its decisions, we found two primary weaknesses in its processes for collecting that input. First, FCC’s use of NPRMs to pose broad questions without providing actual rule text can limit stakeholders’ ability to determine either what action FCC is considering or what information would be most helpful to FCC when developing a final rule. Second, although FCC has developed rules intended to protect the fairness of ex parte proceedings, FCC neither provides detailed guidance on what constitutes a sufficient ex parte summary nor has a process for proactively ensuring that ex parte summaries are complete. If parties are able to submit vague ex parte summaries that may not fully reflect meetings between FCC officials and outside parties, then stakeholders will continue to question whether commission decisions are being influenced by information that was not subject to public comment or rebuttal and that, in some cases, is submitted just before a commission vote. FCC is currently exploring new methods of collecting public comment and potential revisions to its ex parte process. Finally, at a time when the telecommunications industry has become increasingly complex, a large percentage of FCC’s economists and engineers will be eligible for retirement by 2011, and FCC has faced challenges in recruiting new staff.
FCC has taken several positive steps to help meet its workforce needs, including instituting hiring and staff development programs and beginning efforts to identify its current workforce expertise needs. However, continued focus on identifying and instituting additional methods that improve its flexibility to meet its expertise needs, and on developing measures for tracking its progress, will help ensure that FCC is well positioned to anticipate and address its current and future workforce and expertise needs. We have identified four areas of concern and are making seven recommendations to address these concerns.

To ensure interbureau coordination on crosscutting issues, we recommend that the Federal Communications Commission (FCC) take the following two actions:

Develop written policies outlining how and when FCC will identify issues under the jurisdiction of more than one bureau; determine which bureau will serve as the lead on crosscutting issues and outline the responsibilities entailed regarding coordinating with other bureaus; and ensure that staff from separate bureaus and offices can communicate on issues spanning more than one bureau.

Review whether it needs to redefine the roles and responsibilities of the Office of Engineering and Technology (OET) and the Office of Strategic Planning and Policy Analysis (OSP) and make any needed revisions.

To clarify FCC’s policies on providing commissioners access to information from bureaus and offices about agenda items, we recommend FCC take the following two actions:

Each chairman, at the beginning of his or her term, should develop and make publicly available internal policies that outline the extent to which commissioners can access information from the bureaus and offices during the decision-making process, including how commissioners can request and receive information.

Provide this policy to FCC’s congressional oversight committees to aid their oversight efforts.

To improve the transparency and effectiveness of the decision-making process, we recommend that FCC take the following two actions:

Where appropriate, include the actual text of proposed rules or rule changes in either a Notice of Proposed Rulemaking or a Further Notice of Proposed Rulemaking before the commission votes on new or modified rules.

Revise its ex parte policies by modifying its current guidance to further clarify FCC’s criteria for determining what is a sufficient ex parte summary and to address perceived discrepancies at the commission on this issue; clarifying FCC officials’ roles in ensuring the accuracy of ex parte summaries and establishing a proactive review process for these summaries; and creating a mechanism to ensure all commissioners are promptly notified of substantive filings made on items that are on the Sunshine Agenda.

To improve FCC’s workforce planning efforts, we recommend that FCC take the following action:

In revising its current Strategic Human Capital Plan, include targets that identify the type of workforce expertise needed, strategies for meeting these targets—including methods to more flexibly augment the workforce—and measures for tracking progress toward these targets.

FCC provided written comments, which are reproduced in appendix III. In its comments, FCC generally concurred with our recommendations and noted that it has already begun taking steps to address the areas of concern identified in our recommendations. For example, FCC stated that it is in the midst of a review of its existing processes.
As part of this process, FCC is reviewing prior procedures for interbureau communication, as well as prior and current practices for commissioner and staff communication. FCC stated that it would identify and incorporate lessons learned and best practices into future internal procedures. FCC did not specifically state whether future policies on commissioner access to bureau and office information during the decision-making process would be made publicly available or provided to FCC’s congressional oversight committees. We believe these would be important steps in improving the transparency of FCC’s decision-making process. FCC also did not specifically discuss our recommendation that it review whether it needs to redefine the roles and responsibilities of OET and OSP and make any needed revisions. Regarding the public comment process, FCC stated that it has worked to include the text of proposed rules in recently issued NPRMs. However, FCC did not state whether this would be an ongoing policy. FCC also noted that the Office of General Counsel is in the midst of reviewing proposals for modifying the current ex parte process, and stated that this review may lead to a rulemaking to address this issue. Finally, FCC believes that it does not face significant challenges in recruiting top candidates and stated that its unique mission and the influence of its regulatory activities on the communications industry and its practices help it attract qualified candidates. However, it concurred that revisions to the current Strategic Human Capital Plan should include targets and measures for tracking progress toward these targets. We recognize FCC’s efforts to enhance internal and external communication, to update its comment filing system, and to continue to review other existing processes and workforce planning efforts. However, addressing our specific recommendations will further enhance FCC’s efforts to date by promoting internal communication and coordination, clarifying policies on commissioner access to staff analyses, enhancing FCC’s methods for collecting public input, and developing methods to ensure it has the staff expertise it needs. In addition, we provided the Federal Energy Regulatory Commission, Federal Trade Commission, and Nuclear Regulatory Commission with a draft of this report for review and comment. They did not offer any comments on our findings or recommendations, but provided technical corrections, which we incorporated. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies to the Chairman of the Federal Communications Commission and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

This report examines the Federal Communications Commission’s (FCC) organization, decision-making process, and personnel management.
In particular, the report provides information on (1) the extent to which FCC’s bureau structure presents challenges for the agency in adapting to an evolving marketplace; (2) the extent to which FCC’s decision-making processes present challenges for FCC, and what opportunities, if any, exist for improvement; and (3) the extent to which FCC’s personnel management and workforce planning efforts ensure that FCC has the workforce needed to achieve its mission. To respond to the overall objectives of this report, we interviewed current and former officials from FCC, including former chief economists and chiefs of staff, bureau and office chiefs and acting bureau and office chiefs, commissioners, and chairmen. In addition, we reviewed FCC documents, as well as relevant legislation, federal regulations, and GAO reports on FCC and on areas of focus for this review, such as internal controls and workforce planning. We also interviewed industry associations representing broadcast and cable television, public television, consumer electronics, wireless, and telecommunications companies, as well as public interest groups and other individuals, such as academics with extensive telecommunications experience. Table 3 lists the organizations with which we spoke. To describe the challenges FCC’s bureau structure presents for the agency in adapting to an evolving marketplace, we reviewed FCC’s major internal reorganizations since the Telecommunications Act of 1996. We analyzed FCC procedures and applicable laws, and we reviewed academic literature on organizational theory, the commission structure, and various FCC reform proposals from a number of stakeholders. We used GAO’s internal control and management tool to identify key mechanisms for facilitating the flow of information within an organization. To determine the challenges the commission decision-making process presents for FCC and opportunities for improvement, we reviewed literature on federal rulemaking and potential reforms and on the commission structure and decision-making process. We reviewed FCC internal decision-making documents and the public comments of current and former FCC commissioners and former chairmen to determine how the decision-making process works. We also interviewed officials from independent regulatory agencies, including the Nuclear Regulatory Commission, Federal Energy Regulatory Commission, and the Federal Trade Commission, and, where available, reviewed their internal commission procedures to understand how other independent regulatory agencies implement the commission decision-making process. We reviewed FCC’s decision-making procedures and public comment and ex parte rules, and compared certain aspects to standards established in GAO’s internal control standards and other relevant documents. In addition, we interviewed industry, consumer advocate, and regulatory representatives to gain their perspectives on providing information to FCC during the decision-making process and to identify alternative approaches to the decision-making process. Finally, we reviewed FCC documents, policy papers from outside stakeholders, letters to the Presidential Transition Task Team, and proposed legislation to determine proposals for altering FCC’s public comment process.
To examine whether FCC’s personnel management and workforce planning efforts ensure that FCC has the workforce needed to achieve its mission, we reviewed prior GAO products related to strategic workforce planning and human capital challenges. We then reviewed FCC-generated data on overall staff levels, hiring, attrition, and retirement eligibility over the period of 2003 to 2008. We also reviewed FCC’s 2007-2011 Strategic Human Capital Plan to determine the challenges FCC has identified for addressing future workforce issues, as well as its proposed solutions. We reviewed FCC’s methods for identifying needed skill sets and competencies, including surveys of staff and focus groups. We analyzed results from the Office of Personnel Management’s (OPM) Federal Human Capital Survey for 2008 and compared FCC’s responses on various items with the responses of the rest of U.S. government staff. We performed our review from August 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our review objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

FCC staff is organized into seven operating bureaus and 10 staff offices. The bureaus’ responsibilities include processing applications for licenses and other filings; analyzing complaints; conducting investigations; developing and implementing regulatory policies and programs; and taking part in hearings. FCC’s offices provide support services for the bureaus and commission.

Office of Inspector General: The Office of Inspector General conducts and supervises audits and investigations relating to FCC’s operations. The Inspector General reports to the chairman and informs the chairman and Congress of fraud or any serious problems with the administration of FCC programs and operations discovered during audits and investigations; reviews and recommends corrective action, where appropriate; and reports on progress made in the implementation of those corrective actions.

Office of Engineering and Technology: The Office of Engineering and Technology (OET) advises FCC on engineering matters, manages spectrum, and provides leadership in creating new opportunities for competitive technologies and services for the American public. OET allocates spectrum for nonfederal use and provides expert advice on technical issues before the commission, including helping commissioners understand the tradeoffs of technical issues. In addition to providing technical guidance to the commissioners, OET provides leadership for FCC’s other bureaus on high-level technical and engineering issues that do not fall within the scope of a particular bureau, and it advises the bureaus on technical issues they handle.

Office of General Counsel: The Office of General Counsel serves as the chief legal advisor to the commission and to its various bureaus and offices. The General Counsel also represents the commission in litigation in federal courts, recommends decisions in adjudicatory matters before the commission, assists the commission in its decision-making capacity, and performs a variety of legal functions regarding internal and other administrative matters.
Office of Managing Director: The Office of Managing Director functions as FCC’s chief operating official, serving under the direction and supervision of the chairman. The office develops and manages FCC’s budget and financial programs and its personnel management processes and policies, develops and implements agencywide management systems, coordinates the commission meeting schedule, and manages the distribution and publication of official FCC documents.

Office of Media Relations: The Office of Media Relations is responsible for the dissemination of information on commission issues. The office is responsible for coordinating media requests for information and interviews on FCC proceedings and activities and for encouraging and facilitating media dissemination of commission announcements, orders, and other information.

Office of Administrative Law Judges: The Office of Administrative Law Judges is responsible for conducting the hearings ordered by the commission. The hearing function includes acting on interlocutory requests filed in the proceedings, such as petitions to intervene, petitions to enlarge issues, and contested discovery requests. An administrative law judge, appointed under the Administrative Procedure Act, presides at the hearing, during which documents and sworn testimony are received in evidence and witnesses are cross-examined. At the conclusion of the evidentiary phase of a proceeding, the presiding administrative law judge writes and issues an initial decision, which may be appealed to the commission.

Office of Legislative Affairs: The Office of Legislative Affairs is FCC’s liaison to Congress and provides lawmakers with information regarding FCC regulatory decisions, answers to policy questions, and assistance with constituent concerns. The office also prepares FCC witnesses for congressional hearings and helps create FCC responses to legislative proposals and congressional inquiries. Additionally, the office is a liaison to other federal agencies, as well as state and local governments.

Office of Communications and Business Opportunities: The Office of Communications and Business Opportunities provides advice to the commission on issues and policies concerning opportunities for ownership by small, minority, and women-owned communications businesses. The office works with entrepreneurs, industry, public interest organizations, individuals, and others to provide information about FCC policies, increase ownership and employment opportunities, foster a diversity of voices and viewpoints over the airwaves, and encourage participation in FCC proceedings.

Office of Workplace Diversity: The Office of Workplace Diversity advises the commission on all issues related to workforce diversity, affirmative recruitment, and equal employment opportunity.

Office of Strategic Planning and Policy Analysis: The Office of Strategic Planning and Policy Analysis (OSP) is responsible for working with the chairman, the commissioners, bureaus, and offices to develop a strategic plan identifying short- and long-term policy objectives for the agency. OSP consists of economists, attorneys, and MBAs who serve as expert consultants to the commission in areas of economic, business, and market analysis and other subjects that cut across traditional lines, such as the Internet. The office also reviews legal trends and developments not necessarily related to current FCC proceedings, such as intellectual property law, the Internet, and e-commerce issues.
International Bureau: The International Bureau represents the commission in satellite and international matters. This includes advising the chairman and commissioners on matters of international telecommunications policy and the status of the commission’s actions to promote the vital interests of the American public in international commerce, national defense, and foreign policy areas. The bureau also develops, recommends, and administers policies, rules, and procedures for the authorization and regulation of international telecommunications facilities and service and domestic and international satellite systems.

Wireless Telecommunications Bureau: The Wireless Telecommunications Bureau handles all FCC domestic wireless telecommunications programs and policies—except those involving public safety, satellite communications, or broadcasting—including licensing, enforcement, and regulatory functions. Wireless communications services include cellular telephone, paging, personal communications services, and other commercial and private radio services. The bureau also regulates the use of radio spectrum to fulfill the communications needs of business, aircraft and ship operators, and individuals. The bureau is responsible for implementing the competitive bidding authority for spectrum auctions.

Enforcement Bureau: The Enforcement Bureau is responsible for enforcing provisions of the Communications Act of 1934, FCC’s rules and orders, and the terms and conditions of station authorizations. Major areas of enforcement that are handled by the Enforcement Bureau are (1) consumer protection enforcement, (2) local competition enforcement, and (3) public safety and homeland security enforcement.

Consumer and Governmental Affairs Bureau: The Consumer and Governmental Affairs Bureau (CGB) develops and implements the commission’s consumer policies, including disability access. The bureau conducts consumer outreach and education and maintains a Consumer Center that responds to consumer inquiries and complaints. CGB also maintains collaborative partnerships with state, local, and tribal governments in areas such as emergency preparedness and implementation of new technologies.

Media Bureau: The Media Bureau develops, recommends, and administers the policy and licensing programs relating to electronic media, including cable television, broadcast television, and radio in the United States and its territories. The Media Bureau also handles postlicensing matters regarding direct broadcast satellite service.

Wireline Competition Bureau: The Wireline Competition Bureau develops and recommends policy goals, objectives, programs, and plans for the commission on matters concerning wireline telecommunications. The Wireline Competition Bureau’s overall objectives include ensuring choice, opportunity, and fairness in the development of wireline telecommunications services and markets; developing deregulatory initiatives; promoting economically efficient investment in wireline telecommunications infrastructure; promoting the development and widespread availability of wireline telecommunications services; and fostering economic growth.

Public Safety and Homeland Security Bureau: The Public Safety and Homeland Security Bureau is responsible for developing, recommending, and administering the agency’s policies pertaining to public safety communications issues.
These policies include 911 and E911, operability and interoperability of public safety communications, communications infrastructure protection and disaster response, and network security and reliability. The bureau also serves as a clearinghouse for public safety communications information and takes the lead on emergency response issues.

In addition to the contact listed above, Andrew Von Ah (Assistant Director), Eli Albagli, Pedro Almoguera, Thomas Beall, Timothy Bober, Crystal Huggins, Delwen Jones, Aaron Kaminsky, Joshua Ormond, Sarah Veale, and Mindi Weisenbloom made major contributions to this report.
Rapid changes in the telecommunications industry, such as the development of broadband technologies, present new regulatory challenges for the Federal Communications Commission (FCC). The Government Accountability Office (GAO) was asked to determine (1) the extent to which FCC's bureau structure presents challenges for the agency in adapting to an evolving marketplace; (2) the extent to which FCC's decision-making processes present challenges for FCC, and what opportunities, if any, exist for improvement; and (3) the extent to which FCC's personnel management and workforce planning efforts face challenges in ensuring that FCC has the workforce needed to achieve its mission. GAO reviewed FCC documents and data and conducted literature searches to identify proposed reforms, criteria, and internal control standards, and compared them with FCC's practices. GAO also interviewed current and former FCC chairmen and commissioners, industry stakeholders, academic experts, and consumer representatives. FCC consists of seven bureaus, with some structured along functional lines, such as enforcement, and some structured along technological lines, such as wireless telecommunications and media. Although there have been changes in FCC's bureau structure, developments in the telecommunications industry continue to create issues that span the jurisdiction of several bureaus. However, FCC lacks written procedures for ensuring that interbureau collaboration and communication occur. FCC's reliance on informal coordination has created confusion among the bureaus regarding who is responsible for handling certain issues. In addition, the lack of written procedures has allowed various chairmen to determine the extent to which interbureau collaboration and communication occur. This has led to instances in which FCC's final analyses lacked input from all relevant staff. Although FCC stated that it relies on its functional offices, such as its engineering and strategic planning offices, to address crosscutting issues, stakeholders have expressed concerns regarding the chairman's ability to influence these offices. Weaknesses in FCC's processes for collecting and using information also raise concerns regarding the transparency and informed nature of FCC's decision-making process. FCC has five commissioners, one of whom is designated chairman. FCC lacks internal policies regarding commissioner access to staff analyses during the decision-making process, and some chairmen have restricted this access. Such restrictions may undermine the group decision-making process and affect the quality of FCC's decisions. In addition, GAO identified weaknesses in FCC's processes for collecting public input on proposed rules. Specifically, FCC rarely includes the text of a proposed rule when issuing a Notice of Proposed Rulemaking to collect public comment on a rule change, although some studies have noted that providing proposed rule text helps focus public input. Additionally, FCC has developed rules regarding contacts between external parties and FCC officials (known as ex parte contacts) that require the external party to provide FCC with a summary of the new information presented for inclusion in the public record. However, several stakeholders told us that FCC's ex parte process allows vague ex parte summaries and that in some cases, ex parte contacts can occur just before a commission vote, which can limit stakeholders' ability to determine what information was provided and to rebut or discuss that information.
FCC faces challenges in ensuring it has the expertise needed to adapt to a changing marketplace. For example, a large percentage of FCC's economists and engineers will be eligible to retire by 2011, and FCC faces difficulty recruiting top candidates. FCC has initiated recruitment and development programs and has begun evaluating its workforce needs. GAO previously noted that strategic workforce planning should include identifying needs, developing strategies to address these needs, and tracking progress. However, FCC's Strategic Human Capital Plan does not establish targets for its expertise needs, making it difficult to assess the agency's progress in addressing those needs.
Medicare began contracting with managed care plans on a cost-reimbursement basis in the 1970s. In 1982, Congress passed the Tax Equity and Fiscal Responsibility Act (TEFRA), which created the first Medicare risk contracting program for managed care plans beginning in 1985. The Medicare risk program evolved into today’s MA program. TEFRA also retained and authorized Medicare cost plans as an option if an organization did not have the capacity to bear the risk of potential losses, had an insufficient number of members to be eligible for a risk-sharing contract, or elected to offer a cost plan rather than a risk plan. Enrollment in the Medicare risk program grew from nearly 498,000 in 1985 to about 5.2 million beneficiaries by 1997, primarily concentrated in urban counties. The Balanced Budget Act of 1997 (BBA) phased out the existing risk program and created a new risk program called Medicare+Choice (M+C). Under the M+C program, the method used to pay participating plans was revised significantly, and, due in part to these payment changes, by 2000 many health plans began to withdraw from the program. Enrollment fell from 6.3 million in 1999 to 4.6 million by 2003. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) renamed the M+C program Medicare Advantage and provided increases to MA plan payment rates and other program changes. MA enrollment increased steadily from 2003 to 2009. The MA program includes several types of plans:

Local coordinated care plans (CCP) consist of:

Health maintenance organizations (HMO), which have defined provider networks and primary care gatekeepers. Beneficiaries enrolled in HMOs generally are required to obtain services from hospitals and doctors in the plan’s network, but some HMOs offer a point-of-service option under which a beneficiary may elect to obtain services from a non-network provider, though at a higher out-of-pocket cost.

Preferred provider organizations (PPO), which have defined provider networks and no requirement that beneficiaries obtain referrals for care. Beneficiaries enrolled in PPOs can use non-network providers, but at a higher out-of-pocket cost than in-network providers.

Provider sponsored organizations (PSO), which doctors, hospitals, or other Medicare providers operate rather than a health insurance company. The providers that operate the PSO furnish the majority of the health care and share in the financial risk of providing the health care to the beneficiaries enrolled in the plan.

Regional PPOs, which have a service area comprising 1 or more of 26 state-level or multistate-level CMS-defined regions. Regional PPOs are also CCPs.

Special needs plans, which exclusively or disproportionately enroll special needs individuals. Special needs individuals are beneficiaries who are institutionalized, eligible for both Medicare and Medicaid, or have a disability or chronic condition. Special needs plans can be any type of CCP.

Private fee-for-service (PFFS) plans, which are local plans that are not required to have a contracted provider network as long as they pay willing providers at least the Medicare FFS rate.

MA benefit packages may include Medicare Part D coverage; however, all CCPs must offer at least one benefit package with Part D coverage. CMS pays cost plans the reasonable cost of the Medicare-covered services they furnish directly to, or arrange for, Medicare beneficiaries enrolled in their plan, less the value of the deductible and coinsurance.
In addition to the costs directly related to the provision of health services, CMS also pays reasonable costs associated with operating a health plan, such as marketing, enrollment, and membership expenses. Cost plans receive an advance interim payment per member per month based on the cost plan’s estimated reimbursable costs. CMS and the cost plans make adjustments after the contract period to align the payments with the actual costs incurred following the plan’s submission of an independently certified cost report that details cost, utilization, and enrollment data for the entire contract period. As of December 2009, 18 organizations operated 22 cost plans, with enrollments ranging from 50 to 74,190. Of the 22 cost plans, 15 were open to enrollment. Nonprofit organizations operated 17 of the 22 cost plans. (See app. I for a list of the organizations that offer cost plans.) Eight cost plans offered Part D coverage in 2009. Cost plans served at least one county in 16 states and the District of Columbia in 2009. Cost plans were most prevalent in Minnesota, where 3 cost plans operated across most of the state. (See fig. 1.) Congress acted to curtail the expansion of cost plans multiple times. The BBA provided that upon enactment, with few exceptions, the Secretary of Health and Human Services could not enter into any new cost plan contracts and could not extend or renew a cost plan contract beyond December 31, 2002. Subsequent laws modified the circumstances under which existing cost plans could continue to operate. The MMA, for example, allowed existing cost plans to be extended indefinitely, with the exception that, beginning January 1, 2008, the Secretary could no longer renew or extend contracts for cost plans serving an area that for the previous year was also served by two or more regional CCPs or two or more local CCPs and that also met specified enrollment thresholds. A cost plan would only be required to leave the counties within its service area that were also served by the CCPs. Most recently, MIPPA extended the exception provision to January 1, 2010, meaning cost plans affected by this provision would close at the beginning of calendar year 2011. MIPPA also requires that the qualifying CCPs used to determine whether the cost plan must close must be offered by more than one organization (see fig. 2). Separately, CMS requires organizations operating cost plans to close their cost plans to new enrollment if they open an MA plan in the same service area. Based on 2009 enrollment information through December, CMS’s preliminary estimates were that 7 of the 22 cost plans would need to withdraw from some or all of their 2009 service area in 2011. Three of the 7 cost plans would need to withdraw from their entire service area. All 3 of these plans already were closed to new enrollment in 2009. In total, CMS estimated that approximately 8,000 beneficiaries enrolled in cost plans, or about 3 percent of total cost plan enrollment, are in counties where their cost plan must discontinue service in 2011. (See app. II for a list of the cost plans that would likely be affected by the MIPPA provision.) All beneficiaries enrolled in cost plans had multiple MA options available to them. Nearly 100 percent of beneficiaries enrolled in cost plans had at least 5 MA plans serving their county in June 2009, and more than 57 percent had a choice of 15 or more MA plans (see fig. 3). 
About 10 percent of beneficiaries enrolled in cost plans had no local CCP, the MA plan type with the highest enrollment, in their county. Two cost plans, located in Colorado and Texas, enrolled about 90 percent of the beneficiaries without a local CCP plan option. About 10 percent of beneficiaries in cost plans were in counties without an HMO, about 62 percent were in counties without a local PPO, and about 8 percent were in counties without a regional PPO. Approximately 42 percent of beneficiaries enrolled in a cost plan were in counties with five or more MA HMOs. All beneficiaries enrolled in a cost plan could enroll in a PFFS plan in June 2009. (See table 1.) Some of the differences between cost plans and MA plans that affect beneficiaries involve out-of-network coverage, enrollment periods, and prescription drug coverage. Cost plans generally scored higher than competing MA plans on the quality scores CMS reports, and their estimated out-of-pocket costs compared to competing MA plans and FFS varied by health status. Cost plans differ structurally from MA plans and Medicare FFS in several ways, including enrollment periods, out-of-network coverage, and prescription drug coverage. For example, cost plans that are open to enrollment must have an open enrollment period of at least 30 consecutive days annually, and some cost plans choose to allow new enrollment all year. Beneficiaries enrolled in cost plans may disenroll at any time. In contrast, beneficiaries enrolled in an MA plan can join, switch, or drop plans, or join FFS, only during certain specified enrollment periods. Beneficiaries enrolled in cost plans who receive Medicare-covered services out of network are covered by Medicare FFS. These services are therefore subject to Medicare FFS coinsurance and deductibles. Out-of-network coverage varies among MA plans according to plan type, but if offered, it is covered by the MA plan, not Medicare FFS. Medicare FFS beneficiaries can receive services from any provider that accepts Medicare. Beneficiaries enrolled in cost plans may obtain Medicare prescription drug coverage by enrolling in any stand-alone Part D plan or a Part D plan offered by the cost plan sponsor. Beneficiaries enrolled in MA plans must choose a Part D plan offered by the MA plan sponsor. Beneficiaries enrolled in PFFS plans also must choose a Part D plan offered by the PFFS sponsor, unless the sponsor does not offer one, in which case beneficiaries can choose any Part D plan. Beneficiaries enrolled in FFS may choose any Part D plan. For additional information about structural differences between cost plans, MA plans, and Medicare FFS, see appendix III. Our analysis of CMS quality scores found that cost plans’ quality scores, on average, were higher than the average of competing MA plans. All 12 of the cost plans with plan summary scores, based on a scale of 1 (poor quality) to 5 (excellent quality), were rated higher than or the same as their MA competitors in the county with the cost plan’s highest enrollment. These 12 cost plans enrolled about 202,500 beneficiaries, or about 70 percent of the total cost plan enrollment nationwide. (See fig. 4.) The majority of cost plans had higher scores than their MA competitors in each of the five quality dimensions that make up the plan summary score.
For example, 15 of the 18 cost plans with a score for the dimension “Ratings of Health Plans Responsiveness and Care,” which includes ratings of beneficiary satisfaction with the plan, were rated higher than their MA competitors. Similarly, 17 of the 20 cost plans with a score for the dimension “Staying Healthy,” which includes how often beneficiaries got various screening tests, vaccines, and other check-ups, were rated higher than their competitors. The majority of cost plans also rated higher than their competitors in the other three quality dimensions reported by CMS. (See fig. 5.) In 2009, estimated average out-of-pocket costs in cost plans, MA plans, and Medicare FFS varied by the health status of beneficiaries. In general, beneficiaries 80 to 84 years old reporting poor health had lower estimated average out-of-pocket costs in cost plans compared to competitor MA plans and Medicare FFS, while beneficiaries in the same age group in cost plans reporting good or excellent health had higher estimated average out-of-pocket costs. Specifically, we found estimated out-of-pocket costs for beneficiaries reporting poor health in 11 of the 12 cost plans without drug coverage to be lower, on average, than in other MA options, ranging from 69 to 99 percent of the costs in competitor MA plans. Similarly, beneficiaries reporting poor health in all of the cost plans without drug coverage had out-of-pocket costs that ranged from 67 to 92 percent of Medicare FFS costs. We also found out-of-pocket costs to be lower by similar amounts for beneficiaries in poor health in 4 of the 8 cost plans with drug coverage compared to other MA plans with drug coverage. (See fig. 6.) For 80- to 84-year-old beneficiaries reporting good and excellent health in 8 of the 12 cost plans without drug coverage, we found estimated out-of-pocket costs, on average, that were 5 to 37 percent higher than in competing MA plans without drug coverage. Beneficiaries in the same age category reporting good health in 6 of the 12 cost plans without drug coverage had, on average, higher out-of-pocket costs than FFS, and beneficiaries reporting excellent health in 11 of the 12 cost plans had higher estimated out-of-pocket costs than FFS. Similarly, for beneficiaries reporting good and excellent health in 7 of the 8 cost plans with drug coverage, the estimated out-of-pocket costs were 6 to 36 percent higher than in competitor MA plans with drug coverage. In June 2009, 9 of the 18 organizations offering cost plans also offered MA plans in some or all of their cost plans’ service areas, which demonstrates that these organizations were capable of bearing financial risk. (See table 2.) These 9 organizations operated a total of 12 cost plans. The combined enrollment in these 12 cost plans in June 2009 was 124,467, or 43 percent of all beneficiaries in cost plans. Seven of the 12 cost plans enrolled fewer than 2,000 beneficiaries, and 2 of the plans enrolled more than 30,000 beneficiaries. Of the 9 organizations that offered both cost plans and MA plans, 1 organization’s only MA plan was a special needs plan. Special needs plans exclusively or disproportionately enroll special needs individuals, so these plans may not be an appropriate option for the beneficiaries enrolled in this organization’s cost plan. The other eight organizations, whose MA plans were not limited to special needs plans, each operated at least one MA plan in some or all of their cost plan’s service area.
Seven operated at least one MA plan in their cost plan’s entire service area, and one operated at least one MA plan in part of its cost plan’s service area. CMS requires that organizations that offer an MA plan in the same service area as their cost plan close the cost plan to new enrollment. However, officials from CMS stated that there are some exceptions to this requirement. For instance, some organizations were able to keep their cost plan open to enrollment because the units of the organization that contracted with CMS were two distinct entities. The officials stated that they may inspect these organizations to ensure that they do not share beneficiary information across companies and do not route certain beneficiaries into different plans to maximize their profits. The Medicare managed care enrollment composition for the eight organizations that operated both a cost plan and an MA plan that was not a special needs plan varied. All eight of these organizations had a total enrollment, including both their cost plan enrollment and MA enrollment, of at least 3,000 beneficiaries. Four of the eight organizations enrolled more than 95 percent of their total Medicare managed care enrollment in their MA plans. Another organization’s Medicare managed care enrollment was fairly evenly split between its MA plan and cost plan. The remaining three organizations had more than 75 percent of their Medicare managed care enrollment in their cost plan. Six of the 11 MA plans offered by organizations that offered both cost plans and MA plans were local HMOs (see table 3). Officials from organizations that offered cost plans cited potential future changes to MA payments and difficulty assuming financial risk as concerns about converting cost plans to MA plans. Officials also expressed concerns about the potential disruption to beneficiaries that could be caused by transferring beneficiaries in cost plans to an MA plan. Officials from organizations that offered cost plans reported that potential changes to MA payments were a significant concern in their decision about whether to convert their cost plans to MA plans. Officials from 13 of the 18 organizations that offered cost plans identified past payment changes in the Medicare risk programs and the potential for future payment changes in the MA program as making the decision to convert difficult, though 6 of these organizations offered an MA plan in some or all of their cost plan’s service area in 2009. For instance, officials from one organization told us that they would prefer to convert their cost plan to an MA plan but have not done so because of concerns that future MA payment changes might necessitate closing the plan. Recent congressional and administration proposals have called for slowing the increase in or reducing MA payments. Officials from some organizations said that the size of their enrollment was insufficient to manage the financial risk associated with the MA program. Officials from 5 of the 18 organizations that offered cost plans stated that their enrollment was too low to spread financial risk. For example, an official from 1 of these 5 organizations stated that, because of the plan’s location in a rural area, its enrollment would never be large, and its cost plan could not take on the financial risk. This official told us that a few high-cost beneficiaries would consume the payments the plan would receive from CMS.
In June 2009, the cost plan enrollment levels for these 5 organizations ranged from fewer than 500 beneficiaries to about 37,000 beneficiaries. Despite the concerns of these 5 organizations, we found that plans of equivalent size were able to operate in the MA program. Nationwide, 130 MA plans, or about 21 percent of all MA plans, enrolled fewer than 500 beneficiaries, and 69 percent of MA plans enrolled from 501 to 37,000 beneficiaries. Eleven percent of MA plans enrolled more than 37,000 beneficiaries. According to CMS officials, after 3 years of operation, MA organizations should be able to meet the agency’s enrollment threshold of 1,500 in rural areas and 5,000 in urban areas. However, these officials noted that the total enrollment can include enrollment from other lines of business if the enrollment is with the same legal entity that holds the contract with CMS. Officials from 3 of the 18 organizations that offered cost plans expressed concern about meeting risk-based capital (RBC) requirements, should they be required to convert to an MA plan. An official from the Medicare Cost Contractors Alliance also expressed concern about the ability of some organizations that offered cost plans to raise RBC in the event of a required conversion to an MA plan. The official noted that nonprofit organizations operate most cost plans, and some of these organizations have reservations about their ability to raise additional capital. NAIC officials confirmed that if an organization converted a cost plan to an MA plan, thus assuming more financial risk, the organization would probably need to raise more capital, though the extent of capital needed would depend on the size of the organization and how much of the organization’s business was dependent on Medicare enrollment. Two of the three organizations that reported concerns about RBC did not have an MA plan in June 2009. The third organization operated at least one MA plan, but nearly 85 percent of the organization’s Medicare managed care enrollment was in its cost plan. Officials from more than half of the 18 organizations with cost plans stated they were concerned about the potential disruption to beneficiaries if they were required to convert to an MA plan. Some of these officials noted that the beneficiaries enrolled in their cost plan(s) would not understand the process and would default to Medicare FFS. Officials from two organizations with closed cost plans stated that they have tried in the past to transfer the beneficiaries enrolled in their cost plans into the organizations’ MA plans, but had trouble convincing beneficiaries to change plans. In general, if an organization decided to convert a cost plan to an MA plan, the organization would need to close the cost plan and open a new MA plan, if it did not already have one. Beneficiaries who wish to enroll in an MA plan offered by the organization that offered their cost plan must affirmatively enroll in the organization’s MA plan. Those who do not choose a plan—whether unintentionally or by design—will be enrolled by default in Medicare FFS. CMS does have a standard process in place to alert beneficiaries when their MA or cost plan discontinues serving the beneficiaries’ area. CMS requires cost plans that discontinue serving an area to notify each Medicare beneficiary enrolled in the plan by mail at least 60 days prior to the end of the contract period and to notify the general public at least 30 days prior to the end of the contract period.
CMS stated that it would strongly suggest that cost plans adhere to the more stringent MA requirements regarding plan closures, which require the organization offering the plan to notify each Medicare beneficiary enrolled in the plan at least 90 days before it stops operating by sending a CMS-approved notice describing available alternatives for obtaining Medicare services within the service area, including MA plans and Medicare FFS. The organization also must publish a notice in one or more local newspapers at least 90 days before the end of the calendar year to alert the public. We provided a draft of this report to CMS and the Medicare Cost Contractors Alliance. CMS provided us with technical comments, which we have incorporated as appropriate, and representatives from the Medicare Cost Contractors Alliance provided us with oral comments. Officials from the Medicare Cost Contractors Alliance stated that it was important to know whether cost plans were open or closed to enrollment in our discussion of competitors and in the discussion of organizations offering both cost plans and MA plans. Information on the enrollment status of cost plans is provided in appendices I and II. The Medicare Cost Contractors Alliance officials also stated that cost plans have been in the Medicare managed care market significantly longer than most MA plans and that it is this experience that has led the organizations to be wary of potential payment changes to the MA program. In addition, the Medicare Cost Contractors Alliance officials provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of CMS and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. As of December 2009, 18 organizations operated 22 cost plans, with enrollments ranging from 50 to 74,190 beneficiaries. Of the 22 cost plans, 15 were open to enrollment. Nonprofit organizations operated 17 of the 22 cost plans. The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) provides that the Secretary of Health and Human Services would not extend or renew cost plan contracts for service areas where, during the entire previous year, two or more regional coordinated care plans (CCP) or two or more local CCPs were offered by different organizations, if the MA plans met specified enrollment thresholds. Private fee-for-service plans are not CCPs.

Appendix III: Structural Differences among Cost Plans, MA Plans, and Medicare FFS for Beneficiaries

Enrollment period. Cost plan: cost plans that are open to enrollment must have an open enrollment period of at least 30 consecutive days annually. MA PPO (local and regional): when the beneficiary becomes eligible for Medicare, during the Annual Election Period (AEP), the MA Open Enrollment Period (MA OEP), or a Special Enrollment Period (SEP). MA HMO: when the beneficiary becomes eligible for Medicare, during the AEP, the MA OEP, or a SEP. MA private fee-for-service (PFFS): when the beneficiary becomes eligible for Medicare, during the AEP, the MA OEP, or a SEP. Medicare FFS: when the beneficiary becomes eligible for Medicare, during the general enrollment period of January 1st to March 31st of each year, or during a SEP.

Disenrollment. Cost plan: beneficiary may disenroll at any time and return to FFS. MA PPO, MA HMO, and MA PFFS: in most cases, beneficiary must stay enrolled for the calendar year in which coverage begins. Medicare FFS: in most cases, beneficiary must stay enrolled until the next AEP or MA OEP.

Out-of-network coverage and provider choice. Cost plan: through Medicare FFS; beneficiary is responsible for FFS coinsurance and deductibles. MA PPO: services received out of network will generally cost more, but they are covered by the plan. MA HMO: beneficiary generally responsible for the full cost of out-of-network services; however, some plans may cover certain services out of network at a higher cost. MA PFFS: any Medicare-approved provider that accepts the plan's terms. Medicare FFS: any provider that accepts Medicare.

Prescription drug coverage. Cost plan: any stand-alone Medicare Prescription Drug Plan (PDP) or a PDP offered by the cost plan organization. MA PPO and MA HMO: a Part D plan offered by the MA organization. MA PFFS: a PDP offered by the PFFS organization; if the organization does not offer a PDP, beneficiaries can choose any PDP. Medicare FFS: any PDP.

Appeals. MA PPO, MA HMO, and MA PFFS: subject to the MA plan's internal appeal process. Medicare FFS: subject to the Medicare appeals process.

The Annual Election Period (AEP) is from November 15th to December 31st, the MA Open Enrollment Period (MA OEP) is from January 1st to March 31st, and Special Enrollment Periods (SEP) apply whenever a beneficiary meets certain criteria, such as moving out of their current plan's service area. Beneficiaries have the right to appeal coverage decisions; this chart relates to the first of the five levels of appeals.

In addition to the contact above, Christine Brudevold, Assistant Director; Lori Achman; Julianne Flowers; Hannah Marston; Sarah Marshall; Elizabeth T. Morrison; and Amanda Pusey made key contributions to this report.
Medicare cost plans--managed care plans paid based on the reasonable costs of delivering Medicare-covered services--enroll a small number of beneficiaries compared to Medicare Advantage (MA), Medicare's managed care program in which the plans accept financial risk if their costs exceed fixed payments received for each enrolled beneficiary. Despite the small enrollment, industry representatives stated that cost plans provide a managed care option in areas that traditionally had few or no MA plans. Current law allows existing cost plans to continue operating unless specific MA plans of sufficient enrollment serve the same area. In such cases, the cost plan must discontinue serving that area beginning in 2011. The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) required the Government Accountability Office (GAO) to examine issues related to the conversion of Medicare cost plans to MA plans. In response, GAO (1) determined the MA options available to beneficiaries in cost plans, (2) described key differences for beneficiaries between cost plans, MA plans, and Medicare fee-for-service (FFS); (3) determined the extent to which organizations offering cost plans also offer MA plans; and (4) described concerns cost plans have about converting to MA plans. GAO analyzed data from the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare. GAO also reviewed requirements for Medicare managed care plans and interviewed officials from all Medicare cost plans and CMS. All Medicare beneficiaries enrolled in the 22 cost plans had multiple MA options available to them. Nearly all beneficiaries enrolled in cost plans had at least 5 MA plans serving their county in June 2009, and more than 57 percent had a choice of 15 or more MA plans. Some of the differences between cost plans and MA plans that affect beneficiaries are out-of-network coverage, enrollment periods, and prescription drug coverage. Cost plans' quality scores, on average, were higher than the average of competing MA plans' scores in the county with the cost plan's highest enrollment. Estimated out-of-pocket costs varied between cost plans and other options depending on the self-reported health status of the beneficiary. In general, beneficiaries reporting poor health had lower estimated average out-of-pocket costs in most cost plans compared to competitor MA plans and FFS, while beneficiaries reporting good or excellent health had relatively higher estimated costs in most cost plans compared to MA plans and FFS. Half of the 18 organizations offering cost plans also offered at least one MA plan in some or all of their cost plans' service area. These 9 organizations operated a total of 12 cost plans. In general, organizations that offer cost plans and MA plans in the same service area must close their cost plan to enrollment. Officials from organizations that offered cost plans cited potential future changes to MA payments and difficulty assuming financial risk as concerns about converting cost plans to MA plans. Unlike cost plans, MA plans assume financial risk if payments from CMS do not cover their costs. Officials from 13 of the 18 organizations offering cost plans identified past and the potential for future payment changes in the MA program as reasons the decision to convert was difficult, though 6 of these organizations offered an MA plan in some or all of their cost plan's service area in 2009. 
Additionally, officials from 5 organizations said that their enrollment was insufficient to manage the financial risk plans would need to accept in the MA program. Officials from more than half of the organizations that offered cost plans also expressed concerns about the potential disruption to beneficiaries caused by transferring beneficiaries from cost plans to MA plans. GAO provided a draft of this report to CMS. CMS provided GAO with technical comments, which were incorporated as appropriate.
Currently, seven Class I railroads own and maintain over 61,000 bridges and over 800 tunnels, and 40 Class II and 509 Class III railroads own and maintain over 15,000 bridges. According to FRA documents, in 2002, the U.S. railroad network contained approximately one bridge for every 1.4 miles of track. Class I railroads operate on approximately 70 percent of the total route miles in the United States and generate 90 percent of total railroad revenues. Class II and III railroads also play a critical role in the national freight railroad network, serving as feeders to Class I main lines. According to the American Short Line and Regional Railroad Association (ASLRRA), Class II and III railroads handle one out of every four carloads moved on the U.S. freight railroad system. Between 1978 and 2004, railroad traffic on Class I railroads increased dramatically while the number of railroad track miles decreased, as evidenced by an increase in the ratio of train-miles to track-miles (see fig. 1). In addition, freight volumes increased, as evidenced by a 105 percent increase in ton-miles per route-mile since 1990, from 8.63 million in 1990 to 17.70 million in 2005 (see fig. 2). These changes have focused more and heavier traffic over fewer core lines, thereby increasing both the strain on and the importance of key bridges and tunnels, such as those over the Mississippi River and underneath Baltimore. Bridges and tunnels on the freight railroad network are aging and are susceptible to a variety of conditions that may cause wear or deterioration. Railroad bridges are constructed from timber, steel, masonry or concrete, or a combination of these materials. According to an FRA bridge survey completed in 1993, more than half of the nation’s railroad bridges were built before 1920. This survey, which FRA’s Chief Structural Engineer told us is largely applicable today, found that 36 percent of railroad bridges were made of timber, 32 percent of steel, and 20 percent of masonry; the remaining 12 percent of bridges were not identified by bridge type. Increased weight and traffic can cause fatigue in timber and steel bridges. Timber bridges are also susceptible to decay from weather and insects, and steel bridges near salt water may be susceptible to high rates of corrosion. Masonry bridges are more vulnerable to the effects of time and nature than to the weight of traffic, but reinforced concrete bridges are susceptible to the effects of traffic loads. According to FRA, from 1998 through 2006 a total of 22 train accidents, involving one injury and no fatalities, were attributed to bridge structural failures. The most recent fatality resulting from a bridge structural failure occurred in 1957. Likewise, very few major railroad tunnels have been built within the last 50 years, according to FRA’s Chief Structural Engineer, although some have undergone maintenance or capacity expansion in recent years. Some tunnels are driven directly through rock; some are lined with brick or stone masonry, concrete, or timber; and many tunnels include two or more types of construction. Tunnels do not take stress from train traffic in the same way that bridges do, but they are susceptible to drainage issues, and timber-lined tunnels are particularly susceptible to fires. According to FRA, from 1982 through 2006 there were five reportable train accidents whose cause could have been related to the tunnel structure. One of these accidents resulted in two injuries, and none of the accidents resulted in a fatality. 
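As a quick arithmetic check of the traffic-density figures cited above (our calculation, not a number reported by FRA or AAR):

\[
\frac{17.70 - 8.63}{8.63} \approx 1.05,
\]

that is, an increase of about 105 percent in ton-miles per route-mile between 1990 and 2005, consistent with the percentage stated.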
Many railroad bridges and tunnels were designed to have long useful life spans, but were built for use by different types of trains. Until recent years, stress from locomotives and cars did not exceed the original design loads for bridges. For example, steel bridges built between 1895 and 1916 were engineered for steam locomotives that inflicted greater stress on bridges than today's locomotives. However, because of their increased weight, freight cars are approaching the design load limits of older bridges. Railcar weight standards have increased from 263,000 pounds to 286,000 pounds, and some cars now weigh as much as 315,000 pounds; however, approximately 45 percent of Class II and III railroad lines are not equipped with track capable of handling 286,000-pound cars, according to ASLRRA. In addition, freight cars have increased in height as increased intermodal freight traffic has led to double-stacking intermodal containers on railroad cars. Some bridges and tunnels do not have the clearance needed to accommodate these double-stack intermodal trains. The majority of the freight railroad network is privately owned, and federal economic regulation of freight railroads has decreased since the federal government deregulated the railroad industry in 1980. All seven Class I railroads are privately owned, and according to ASLRRA, approximately 95 percent of Class II and III railroads are privately owned, with the rest owned by government entities. Private railroads have an incentive to maintain their infrastructure in order to maintain business operations, and most railroads privately finance their infrastructure maintenance and improvement projects. Railroads invest large amounts in fixed assets such as track, signals, bridges, and tunnels. The Association of American Railroads (AAR) estimates that in calendar year 2006 Class I railroads alone invested over $8 billion in "capital commitments," that is, expenditures for capital projects and operating leases. Compared with other industries, railroads invest a higher percentage of revenue in their infrastructure. For example, in 2000, the average U.S. manufacturer spent 3.7 percent of revenue on capital spending, while railroads spent 17.8 percent—almost five times as much, according to an analysis of U.S. Census data prepared by the American Association of State Highway and Transportation Officials (AASHTO). As railroads take steps to increase their capacity—by increasing the size or weight of railroad cars or by adding track—some of their bridges and tunnels may require alterations. A bridge's configuration and condition dictate weight restrictions, and most bridges and tunnels cannot accommodate the additional track, if needed, without replacement or significant reconstruction. Similarly, the dimensions of some bridges and tunnels restrict railroad car height and width. Because bridges and tunnels are the most expensive pieces of railroad infrastructure, with replacement and construction costs ranging from 11 to 550 times as much per linear foot as regular track, capacity expansion projects involving bridge and tunnel work require significant capital investment. While the freight railroad industry is projected to grow substantially with expected increases in freight traffic, the industry's ability to fund this projected growth, including making needed capital infrastructure investments in railroad bridges and tunnels, is largely uncertain.
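Similarly, the capital-spending comparison above can be checked directly from the quoted percentages (again our arithmetic, using only the figures attributed to the AASHTO analysis):

\[
\frac{17.8}{3.7} \approx 4.8,
\]

which supports the characterization that railroads spent almost five times the share of revenue on capital that the average U.S. manufacturer did.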
For private companies seeking to maximize returns to stakeholders, railroad investment poses a substantial risk. A railroad contemplating an infrastructure investment must be confident that the market demand for that infrastructure will hold up for 30 to 50 years. Furthermore, while railroads own and maintain their own infrastructure, some other modes of transportation, such as the trucking and maritime barge industries, use infrastructure that is owned and maintained by the government, providing them with a competitive price advantage over railroads. We have previously reported that railroad investment is critical to freight mobility and economic growth, and investments in railroad projects can produce public benefits, such as (1) reducing highway congestion, (2) strengthening intermodal connections and the efficiency of the publicly owned transportation system, and (3) enhancing public safety and the environment. (See the list of related GAO products at the end of this report.) However, even when the public benefits of freight projects may be sufficient to warrant public funding, federal funding mechanisms may not be well tailored to freight projects. Whereas freight projects are frequently intermodal, most federal funding mechanisms are focused on one mode. In addition, freight projects generate private benefits, raising questions about whether and how to provide public support for them. Major railroads collect and maintain detailed information on the condition of their bridges and tunnels and on the extent to which these structures contribute to network congestion, but less is known about how much information Class II and III railroads collect. Freight railroads generally consider this information proprietary, citing concerns over security and liability, and they selectively share bridge and tunnel information with the government. Meanwhile, the federal government plays a limited role in collecting information on railroad bridges and tunnels because they are privately owned and maintained. In addition, FRA has no regulations or standards for railroad bridges and tunnels, and, in FRA's view, the safety benefits that might accrue from collecting and maintaining information on their condition would not justify the expense. Various other federal agencies collect some information on railroad bridges and tunnels that pertains to their missions. While most bridges and tunnels are not the main cause of freight railroad congestion, some structures are chokepoints and do constrain capacity. Freight railroads set maintenance and investment priorities by considering bridge and tunnel information, together with comparable information on other components of their network infrastructure, and identify those repairs and improvements that will improve safety, provide the highest return on investment, and increase capacity. A bridge or tunnel is likely to cost more to repair—and much more to replace—than other components of railroad infrastructure networks, such as track or signals. As a result, railroads of all classes are more likely to invest in other components sooner and to consider extensive bridge or tunnel repair or replacement as one of their last investment options. Class I railroads, which own over 75 percent of U.S.
railroad bridges and over 800 tunnels, maintain detailed information on the condition of their bridges and tunnels and generally have the resources to invest in a robust maintenance and inspection regime; however, less is known about the information Class II and III railroads collect on bridge and tunnel conditions, according to FRA's Chief Structural Engineer. Officials from five of six Class I railroads with whom we spoke said they maintain bridge and tunnel information electronically in databases—including data on location, age, and other characteristics of the structures; inspection reports; condition information; maintenance histories; design drawings or construction documents; and other pertinent information. While Class I railroad bridge departments vary in size, these departments all have in-house bridge inspectors, engineers, and maintenance-of-way crews that conduct inspections, carry out maintenance and repair activities, and may also design and construct bridges. Class I railroads use in-house bridge inspectors to conduct inspections at least once a year on all bridges and tunnels to monitor safety and assess current conditions. For example, one Class I railroad we interviewed has over 100 personnel dedicated to bridge inspections on its network. According to the limited data we have, Class II and III railroads collect and maintain less information on their bridges and tunnels, and the reliability of the data collected may be poor. Based on our discussions with two Class II and nine Class III railroads, and on the documentation of 43 bridge safety surveys of Class II and III railroads that FRA completed from January 2004 through March 2007, Class II and III railroads collect less information on the condition of their bridges and tunnels, generally contract out bridge and tunnel inspection and repair work, and have less in-house bridge expertise. For example, 18 of the 43 Class II and III railroads reviewed by FRA since January 2004 could not produce some critical documentation related to the safety of their bridges, including past bridge inspection reports, design documents, or complete bridge inventories. Furthermore, only 16 of the 43 Class II and III railroads surveyed by FRA inspect their bridges at least once a year. Also, according to FRA officials, many Class II and III railroads lack the in-house bridge expertise to conduct their own bridge inspections and rely instead on outside consultants. For example, according to the 43 FRA bridge safety surveys of Class II and III railroads, 26 of the railroads contracted out bridge inspections, 7 did not conduct bridge inspections, 4 did not mention who conducted the railroad's bridge inspections, 4 conducted inspections in-house, 1 had an informal inspection arrangement, and 1 was found to have no bridges. In addition, 8 bridge safety surveys provided to us by FRA either found inconsistencies between bridge inspection reports and actual bridge conditions or found insufficient detail in inspection reports. One Class III railroad representative with whom we spoke stated that the true condition of that railroad's bridges, all of which were built by railroads not in existence today, is unknown because the railroad does not have design or construction documents, lacks past maintenance and inspection records, and has never conducted a complete engineering study to determine its bridges' load-carrying capacity.
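As a completeness check on the survey breakdown above (our tally of the categories reported from FRA's 43 bridge safety surveys):

\[
26 + 7 + 4 + 4 + 1 + 1 = 43,
\]

so the inspection-arrangement categories account for every railroad surveyed.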
FRA officials stated that, based on the limited data they have, they believe that some Class III railroads do not have the training or experience needed to recognize critical structural deficiencies or even understand the severity and urgency of identified bridge or tunnel defects. However, FRA officials also stated that some Class II and III railroads have very good bridge management practices because they use qualified outside consultants to perform safety and inspection processes. The federal government's efforts to collect data on railroad bridges and tunnels are limited in scope, and the data are not updated regularly. FRA collects railroad traffic information and maintains geographic data on U.S. freight railroad lines; however, this information does not show the location of bridges or tunnels on these routes. FRA maintains records of railroad accident and incident reports, some involving bridges and tunnels, dating back to 1982, but the information collected is limited to accident descriptions, repair costs, structure locations, and information about the train, crew, and track involved in the accidents and does not show bridge or tunnel condition, age, structure type, or design documents. In addition, as part of the Railroad Rehabilitation and Improvement Financing (RRIF) loan application process, FRA's Office of Railroad Development hires independent engineering firms to verify the condition of the infrastructure and the feasibility of proposed infrastructure improvements. These assessments may provide detailed information on specific railroad infrastructure, including bridges and tunnels; however, the data are limited to the projects submitted in the RRIF loan application process. Furthermore, while FRA collects and updates data on track defects from its track inspections, it collects less information on bridges and tunnels because FRA has regulations detailing track standards but only guidelines for bridges. Although FRA has authority to obtain records related to the safety of railroad operations, including those involving bridges and tunnels, FRA officials expressed concern about the agency becoming a repository for railroad bridge and tunnel data. In addition, FRA's Chief Structural Engineer stated that the expense of collecting and maintaining a comprehensive railroad bridge and tunnel inventory could not be justified from a safety standpoint because railroads already maintain inventories of their own bridges and tunnels, which FRA officials review. No comprehensive inventory exists on the nation's railroad bridges and tunnels; however, through unrelated initiatives over the years, FRA has obtained some information on bridges and tunnels, although, in some cases, this information has not been updated regularly. For example, in 1993, FRA compiled a list of railroad bridges over navigable waterways based on data from the U.S. Coast Guard. However, the list has not been regularly updated. Other federal agencies collect some information on railroad infrastructure as it pertains to their missions, but this information is not comprehensive or exclusive to railroad structures. This information is mainly collected by the Department of Defense (DOD), the Department of Homeland Security (DHS), the Transportation Security Administration (TSA), the Coast Guard, the Army Corps of Engineers, and the Environmental Protection Agency and centers on either security or construction permitting functions.
While railroad bridges and tunnels are aging, their condition is not the main cause of freight railroad congestion; however, some critical bridges and tunnels are chokepoints on the freight railroad network. According to FRA officials and railroad representatives with whom we spoke, many of these structures are reaching or have exceeded their originally estimated useful life. For example, an FRA bridge survey completed in 1993 found that more than half of the nation's railroad bridges were built before 1920 and, according to FRA's Chief Structural Engineer, very few railroad tunnels have been built within the last 50 years. As a bridge ages, it undergoes natural deterioration, including corrosion, and weather-related stresses. In addition, fatigue may occur in some components of older bridges because of stress resulting from repeated heavy freight train operations. FRA's Chief Structural Engineer told us that, as bridges and other components of railroad infrastructure age and their condition worsens, the railroads may need to increase their investment in inspection, maintenance, and replacement to keep existing railroad lines serviceable. One Class I railroad representative said his railroad has a growing inventory of about 300 to 400 older bridges that are deteriorating and therefore need additional inspections and assessments. Quantifying the future maintenance and replacement needs of the freight railroad network is difficult, since private railroads do not make information on the condition of railroad bridges and tunnels publicly available because of concerns over sharing proprietary information and losing competitive advantage. However, the American Society of Civil Engineers gave railroad infrastructure a "C-" grade in its 2005 assessment of the nation's infrastructure, noting that limited capacity on the freight railroad network has created significant chokepoints and delays. Although officials at a few railroads with whom we spoke expressed some concerns about the effect of aging bridges on congestion, they were more concerned about the effect of increased train traffic on congestion. Demand for freight railroad capacity has increased over the last decade with some Class I railroads reaching record traffic levels, especially in ethanol, coal, and intermodal traffic. The demand for such capacity is expected to continue increasing. For example, DOT has projected a 55 percent increase in freight railroad traffic from 2000 to 2020. Increased train traffic places additional stress on existing infrastructure, especially railroad bridges; requires capacity expansion investments in rolling stock, infrastructure, and personnel; and increases congestion on the railroad network. Class I railroads consider congestion a networkwide problem whereas officials of the Class II and III railroads with whom we spoke said they generally experience congestion around crossings, yards, and interchanges with Class I railroads. Although officials from four of the nine Class II and III railroads with whom we spoke said they currently experience congestion on their entire networks, generally, those railroads were more concerned about upgrading existing infrastructure to handle the heavier railcars and longer trains being demanded by Class I railroads than they were with increasing capacity.
The American Short Line and Regional Railroad Association estimates that out of the 48,000 miles of track owned by Class II and III railroads, 20,000 to 25,000 miles need to be upgraded to handle the heavier railcars that are becoming the industry standard. ASLRRA estimated these upgrades would cost $7 billion to $11 billion. Officials at seven of the nine Class II and III railroads with whom we spoke said the railroads had completed or needed to complete track or bridge upgrades to accommodate heavier railcars. Several factors contribute to congestion on freight railroad networks, including grade crossings and passenger trains, both of which can decrease freight railroad capacity and cause freight train delays. Bridges or tunnels may also cause network congestion. For example, single-track bridges and tunnels constrain capacity on double-track lines, as do low clearances that do not accommodate double-stack intermodal trains, bridges that open for marine traffic, and other structural characteristics such as sharp curves and steep grades that require slower train speeds. Deteriorated bridge and tunnel conditions can also contribute to congestion by requiring reduced train speeds, closures, and increased time out of service for maintenance. Where repairs or improvements to bridges and tunnels may not be financially viable or sufficiently profitable, railroads may institute slow orders or shut down lines and reroute traffic. In some cases, especially for Class III railroads, a bridge or tunnel closure can isolate a shipper and cripple a railroad's entire network. Although FRA officials estimated that 10 percent or less of freight railroad congestion is attributable to capacity constraints caused by railroad bridges and tunnels, railroad officials whom we spoke with identified some key bridges and tunnels as chokepoints on their networks. For example, one chokepoint is a moveable bridge that is one of only a few bridges across the Mississippi River owned by a Class I railroad. According to railroad officials, during peak periods, the bridge must open up to 15 times per day for river traffic while accommodating between 65 and 70 trains per day. Each opening for river traffic generally takes an average of 25 to 30 minutes, although the bridge is sometimes open for more than an hour, causing train delays as far as the West Coast. In addition, this bridge is closed for routine maintenance for over an hour several times a week. Another chokepoint is the 1.7-mile Howard Street Tunnel (see fig. 3), constructed in 1895 under downtown Baltimore, Maryland, which is the largest and most expensive obstacle to transporting double-stack railcars from Baltimore to Chicago. The tunnel regularly causes passenger and freight train delays in the Baltimore area and beyond because it is a single-track tunnel with insufficient clearance for double-stack railcars on a double-track main line. Grades in and curves near the Howard Street Tunnel also contribute to congestion, constraining freight traffic to 25 miles per hour through the tunnel. In addition, during a fire in the tunnel in 2001, freight traffic was rerouted, resulting in 18- to 36-hour delays. Freight railroad officials with whom we spoke consider information on bridge and tunnel conditions and congestion, along with information on demand, cost, and other factors, to set infrastructure maintenance and investment priorities.
According to all of the Class I railroad officials with whom we spoke, maintaining or increasing safety is one of their highest investment priorities, along with return on investment. Hence, most Class I railroad officials with whom we spoke said the railroads consider immediate safety concerns first, ongoing maintenance and asset replacement next, and capacity expansion last when prioritizing bridge and tunnel projects. Bridge and tunnel rehabilitation or replacement is expensive, and the costs are highly variable, depending on the complexity of the structure's design, the length and location of the structure, the construction materials, and the type of replacement structure. The cost of replacing a bridge can range from $600,000 for a small timber trestle bridge on a lightly trafficked Class III railroad line to $100 million to replace a large steel bridge with a 2,500-foot moveable span located on a Class I railroad's main line. See appendix II for more examples of railroad bridge and tunnel costs. Because replacement costs are high, railroads prefer to use asset extension programs and replace components rather than replacing entire structures to address deterioration and extend the useful life of their bridges and tunnels. Often, an individual component of a bridge may deteriorate faster than other components; therefore, replacing the component could significantly extend the life of the entire bridge. Bridge and tunnel replacement is typically one of the last options railroads choose to address infrastructure deterioration and mitigate congestion. Railroads typically try to improve their processes before enhancing infrastructure to mitigate congestion. Process improvements and other strategies generally cost less and are more cost-effective than infrastructure enhancements. Class I railroads have used a number of process improvements to mitigate congestion, including updating their operating plans to reflect changes in business volume and traffic mix, increasing train lengths and the number of fully loaded cars per train, double-stacking trains, decreasing car cycle times, increasing service, hiring more train crews, and using pricing strategies to shape demand. When process improvements can no longer reduce congestion, railroads use infrastructure enhancements to expand the capacity of their networks. Infrastructure enhancements include adding sidings or track, expanding yards and terminals, upgrading signal systems, and rehabilitating or replacing bridges and tunnels. Per linear foot, bridge and tunnel replacement costs more than other infrastructure improvements, as shown in figure 4. Moreover, according to several Class I railroad representatives with whom we spoke, bridge replacement typically has a lower return on investment than other infrastructure improvements. Consequently, railroads invest in other enhancements before rehabilitating or replacing bridges. While bridge and tunnel work is expensive for all freight railroads, railroads vary in their ability to make these investments. Class I railroads generally have more resources than Class II and III railroads to invest in bridge and tunnel inspection, maintenance, rehabilitation, and replacement. According to AAR, in 2006, the seven Class I railroads spent an average of $1.2 billion each for capital investments, while all the Class II and III railroads surveyed by ASLRRA spent an average of over $795,000 each in 2004.
Class II and, to a greater extent, Class III railroads face challenges in funding bridge and tunnel rehabilitation or replacement efforts because they may have limited funds, lack in-house bridge and tunnel expertise, and own bridges and tunnels purchased from Class I railroads on lines that those railroads had disinvested in. When repairs or improvements to bridges or tunnels are not financially feasible for Class II or III railroads, the railroads may instead modify their operations—by, for example, reducing train speeds over bridges or in tunnels. According to ASLRRA, some railroads may even stop operating on routes when bridge or tunnel repairs are both unavoidable and unaffordable. As a result, according to FRA officials, fewer serious problems are found on bridges and in tunnels owned by Class I railroads than on bridges or in tunnels owned by smaller railroads. Nonetheless, in response to several accidents caused by bridge failures, near accidents involving bridges, and results from its bridge safety surveys, FRA is developing a formal rail safety advisory on railroad bridges, to be released in late 2007, that will urge all railroads to increase their attention to bridge safety and bridge management programs. Freight railroads are responsible for the structural safety of their bridges and tunnels, and the federal government does not regulate railroad bridge and tunnel inspection requirements or conditions. In 1995, after determining that railroads were already inspecting bridges according to detailed industry standards, FRA decided to issue advisory guidelines for railroad bridge management instead of regulations. Because FRA has general authority over railroad infrastructure safety, it may make observations of and assess bridge and tunnel conditions, but it does not routinely inspect these structures to monitor their condition. FRA bridge specialists may make observations while investigating complaints, following up on track inspectors' concerns, and conducting bridge safety surveys. If an FRA bridge specialist determines that there is a safety problem, FRA attempts to work cooperatively with the railroad to correct the problem rather than shut down the railroad's operations. FRA has taken enforcement action to protect public safety when there is a documented problem of immediate concern over a structure's stability. Other federal agencies also have limited roles in railroad bridge and tunnel safety. FRA's bridge safety oversight has evolved; however, bridge specialists individually apply different criteria in their selection of railroads for bridge safety surveys. FRA has not established a systematic, consistent risk-based approach to selecting Class II and III railroads for bridge safety surveys. As a result, FRA may not be selecting the railroads whose bridges or tunnels are most likely to present safety issues. Historically, the federal role in railroad bridge and tunnel safety has been narrow. The federal government does not routinely inspect railroad bridges or tunnels and does not regulate their condition. After a highway bridge collapsed in 1967, Congress debated instituting bridge inspection standards that would apply to railroad bridges, but railroads were already inspecting their bridges according to their established industry standards. In 1968, Congress required national inspection standards for highway bridges; however, current law does not regulate railroad bridge conditions or establish inspection standards.
Under the authority originally granted to it by the Federal Railroad Safety Act of 1970 to issue safety regulations as necessary, from 1975 to 1981 FRA considered establishing bridge safety regulations based on industry standards created by the American Railway Engineering and Maintenance-of-Way Association. However, according to FRA, these standards are actually recommendations for a thorough bridge management program, including very detailed specifications for particular types of bridges, rather than minimum inspection standards. In light of the industry's detailed safety standards and the low frequency of accidents caused by structural conditions on bridges or in tunnels, FRA determined that regulating bridge or tunnel structural conditions or requiring inspections would not be cost-effective, given the costs of implementation and enforcement. Additionally, while establishing minimum standards might improve some railroads' structural management policies and procedures, it could also lead some railroads to reduce the frequency or effectiveness of their inspections. FRA observes and assesses bridge and tunnel conditions, but does not inspect these structures to regulate their condition. Although FRA does not regulate bridge and tunnel conditions, it does regulate track conditions, and it uses track inspectors, as well as bridge specialists, to identify potential bridge and tunnel safety issues. Historically, FRA track personnel have overseen bridge and tunnel safety. Under the authority originally granted by the Federal Railroad Safety Act of 1970, an FRA track inspector may take action to address a structural concern identified on a bridge or in a tunnel, such as a visible crack in a steel beam, to ensure the safety of the public and railroad employees. Additionally, in 1992, FRA's Office of Safety established the position of Bridge Engineer (currently filled by FRA's Chief Structural Engineer) to assist track personnel in identifying and resolving issues of bridge structural integrity and to oversee standards regulating the safety of railroad bridge workers. After completing a bridge survey in 1993, FRA concluded that most railroads were inspecting bridges to a higher standard than would be required by any FRA-issued minimum standards, which prompted FRA to issue guidelines for bridge management rather than regulations. In 1995, FRA began implementing these guidelines as part of its Bridge Safety Assurance Program. FRA has hired five full-time bridge specialists since 2000 to implement this program. These specialists provide expertise to track personnel and work with them to relieve some of the track personnel's inspection workload related to railroad structures as well as carry out other activities to promote bridge safety. Besides the Chief Structural Engineer, the program now includes one bridge specialist at FRA headquarters and four bridge specialists in the field. Each field bridge specialist is responsible for all of the passenger and freight railroad infrastructure in two FRA regions and one or two Class I railroads (whose infrastructure usually spans multiple FRA regions). In addition to addressing bridge structural concerns, FRA bridge specialists address tunnel structural concerns. However, FRA's involvement in tunnels is not as extensive as its involvement in bridges, since bridges are more affected by stress from trains moving over them than tunnels are from trains moving through them.
In addition, there are many more railroad bridges in the United States than there are tunnels. In observing bridge conditions, FRA bridge specialists use FRA advisory guidelines for railroad bridge management programs. These guidelines recommend, among other things, that organizations responsible for the safety of a bridge ensure that a qualified engineer determines the weight-bearing capability of a bridge; collect bridge design, construction, maintenance, and repair records; and have a competent inspector periodically inspect structures. The guidelines do not pertain to tunnels or other types of structures on railroad property. FRA encourages, but does not require, railroads to comply with these guidelines because the railroads are responsible for inspecting, maintaining, and ensuring the safety of bridges and tunnels that carry their track. However, when a bridge or tunnel owner fails to resolve a structural problem, FRA can use legal means, including emergency orders, to ensure safety. FRA is the primary federal agency responsible for overseeing the safety and structural integrity of railroad bridges and tunnels. FRA bridge specialists perform both enforcement and nonregulatory activities aimed at ensuring the safety of railroad structures. Other federal agencies have more limited roles in railroad bridge and tunnel safety related to their particular missions. FRA bridge specialists play a number of roles intended to promote bridge and tunnel safety, most of which involve responding to identified safety issues. One of their principal roles is to alert FRA's Chief Structural Engineer when they encounter an immediate bridge or tunnel safety concern so that an emergency order may be issued if necessary. These safety concerns may be identified in response to a track inspector's findings, in response to an accident or a complaint, or through independent observation of a railroad's bridges or tunnels. Each bridge specialist has numerous safety responsibilities as part of the Bridge Safety Assurance Program. In particular, the FRA bridge specialists are involved in the following activities: Enforcement. If a bridge specialist notices a track defect on or near a bridge or tunnel, the specialist typically first recommends remedial actions, such as a reduction in train speeds over the affected track segment. If conditions warrant, the FRA Administrator may issue an emergency order. However, FRA prefers to seek cooperative solutions with railroads and has issued only three emergency orders for bridges and none for tunnels since 1970. Accident Investigation. When an accident occurs on a bridge or in a tunnel, one or more bridge specialists may conduct an on-site investigation. In the case of a bridge or tunnel structural failure, the bridge specialist may identify the individual component that caused the failure, although the entire structure may need to be replaced after the accident (see fig. 5). Complaint Investigation. Bridge specialists are responsible for addressing and investigating almost all formal complaints concerning bridges and tunnels filed by the general public, Members of Congress, and railroad employees. According to FRA, most formal bridge complaints from the public are related to aesthetic issues rather than the stability or safety of a structure. Bridge specialists may also conduct structural evaluations in response to concerns identified by FRA track personnel or as part of a complaint investigation. Monitoring Compliance Agreements.
In response to systemic safety concerns that FRA identifies on a railroad through the bridge specialists’ or track personnel’s activities, FRA may work with the railroad to implement a compliance agreement to improve safety across the entire railroad. FRA often initiates a compliance agreement to avoid issuing an emergency order for the railroad to cease operations on a bridge. FRA has found that compliance agreements can be an effective tool to address systemic weaknesses in a railroad’s bridge management practices, while emergency orders usually address serious safety problems on specific bridge structures. Training. At FRA conferences, the bridge specialists teach FRA track inspectors about bridge conditions. This training supports communication between FRA track staff and bridge specialists and is designed to increase the number of FRA personnel that can detect immediate safety concerns on bridges. Conducting Bridge Safety Surveys. During a bridge safety survey, a bridge specialist interviews railroad bridge staff and uses FRA guidelines as criteria for reviewing a railroad’s bridge management policies, procedures, and records. After reviewing the railroad’s records and policies, the bridge specialist observes a sample of the railroad’s bridges and compares the results of the sample observation with the railroad’s bridge inspection reports to determine the inspection reports’ reliability. The bridge specialist documents the findings and follows up with the railroad to document any necessary repairs to structures or improvements to bridge management procedures. Besides FRA, several federal agencies have responsibilities related to railroad bridges and tunnels in areas such as security and clearance for maritime traffic. Within DHS, TSA has issued freight railroad security action items in cooperation with the railroad industry, but compliance with these action items is voluntary. Much as FRA monitors compliance with its guidelines, TSA security inspectors assess a railroad’s compliance with TSA’s action items and may make recommendations if the railroad does not comply with certain items. Additionally, TSA issued a proposed rule in December 2006 that would require freight railroads and other transportation entities to allow TSA and DHS to enter, inspect, and test property, facilities, and records relevant to railroad security. Also within DHS, the U.S. Coast Guard is responsible for overseeing all bridges over navigable waterways and for assessing obstructions to maritime traffic. The Coast Guard regulates movable bridge schedules and prescribes bridge lighting for navigational safety. Within the DOD, the Transportation Engineering Agency designates STRACNET, a network of railroad lines that form the minimum railroad network required to meet the transportation needs of the military. The Transportation Engineering Agency does not directly oversee the condition of bridges or tunnels on this network. FRA’s field bridge specialists monitor bridges and tunnels in a large area and have not been able to assess the bridge policies or the bridges and tunnels of many of the Class II or Class III railroads in the specialists’ assigned areas. Furthermore, as previously discussed, the railroads share information on the condition of their bridges and tunnels with the federal government selectively. As a result, the structural conditions of some bridges and tunnels and the practices used to inspect and maintain them, particularly on Class III railroads, are largely unknown to the federal government. 
According to ASLRRA, there are 549 Class II and III railroads in the United States. Although FRA has conducted bridge safety surveys on all of the Class I railroads, FRA officials estimate that they have conducted, on average, approximately 25 to 35 bridge safety surveys per year on Class II and III railroads since the introduction of the field bridge specialists in 2004. As we mentioned earlier, our analysis of FRA's completed bridge safety surveys during this period showed that some of the surveyed Class II and III railroads had sound bridge management practices and records, but most did not. The limited number of bridge safety surveys that the FRA bridge specialists have been able to accomplish relative to the number of Class II and III railroads could indicate potential bridge and tunnel safety concerns on railroads that FRA has not surveyed. According to FRA, the goal of the Bridge Safety Assurance Program is not to monitor all railroads, but rather to identify railroads whose bridge management policies and bridge conditions may lead to safety threats. However, the FRA bridge specialists do not select Class II and III railroads for bridge safety surveys using a consistent methodology based on a comprehensive, prioritized assessment of safety issues that could focus FRA's inspection and enforcement resources on those railroads that could have the greatest safety risks. Each field bridge specialist uses individually developed criteria, based on personal experience and other available information, such as whether a railroad's bridges carry passenger traffic, to help identify Class II and III railroads as candidates for bridge safety surveys. This is in contrast to how FRA implements its National Inspection Plan to target inspections of other railroad safety areas. This plan provides guidance to each FRA regional office on how its inspectors should divide their work, by railroad and by state, on the basis of trend analyses of available accident, inspection, and other data. Before implementing this plan, FRA had a less structured, less consistent, and less data-driven approach to planning inspections, under which each region prepared its own inspection plan on the basis of judgments and available data. The use of data was not consistent from region to region, and individual inspectors had greater discretion to select sites for inspection using their own knowledge of their inspection territories. In our previous work, we have noted that risk management can help to improve safety by systematically identifying and assessing risks associated with various safety hazards, prioritizing them so that resources may be allocated to address the highest risk first, and ensuring that the most appropriate alternatives to prevent or mitigate the effects of hazards are designed and implemented. FRA's safety oversight role in other areas, such as operating practices and track, includes inspections that focus on compliance with minimum standards; however, these inspections do not attempt to determine how well railroads are managing safety risks on their systems. In contrast, by examining how railroads manage safety risks during its bridge safety surveys, FRA is, in part, addressing risk-management issues, even though it has not established a systematic, risk-based methodology to select Class II and III railroads that may need additional oversight.
For example, one bridge specialist is contacting all Class III railroads in one region to obtain specific information on their bridge management policies, such as whether a railroad has regular inspections by a qualified civil engineer and how the railroad records and uses the bridge inspection data, to better identify railroads for bridge safety surveys. Additionally, FRA's Chief Structural Engineer is considering a research project that would use new technology to measure the stress trains inflict on timber bridges. If this project were implemented, FRA would analyze stress data that might indicate bridge problems and a need for monitoring problematic bridges. Federal, state, and local governments make limited investments in freight railroad infrastructure, including bridges and tunnels, in an effort to enhance the public benefits associated with freight and passenger transportation. However, federal investments in all modes of freight-related infrastructure are not aligned with a national freight policy or with a strategic federal freight transportation plan. DOT has developed a draft Framework for a National Freight Policy, but it lacks a strategic federal component that specifies federal goals, roles, and revenue sources and funding mechanisms. In contrast, some states structure their investments in freight railroad infrastructure to produce public benefits at the state and local levels, and some public-private partnerships have facilitated investments designed to produce public and private benefits. Freight congestion and demand are expected to increase, and given the highly constrained fiscal environment, the federal government may be challenged to increase the efficiency of the national multimodal freight transportation system. While the private sector is largely responsible for investing in the freight railroad infrastructure that it owns and maintains—an estimated $9 billion during calendar year 2006—the federal government invests some public funds in this infrastructure as well—an estimated $263 million during fiscal year 2006. The federal government funds freight railroad infrastructure investments through the General Fund and the Highway Trust Fund, and funding mechanisms include loans, grants (such as formula grants and legislative earmarks), and tax expenditures (such as tax credits). However, these funding mechanisms (1) are targeted toward individual transportation modes and address different transportation safety and economic issues, (2) are administered by different agencies that have different missions, and (3) are not coordinated by a strategic federal multimodal freight transportation policy to maximize specific national public freight transportation benefits (see table 1). For example, in accordance with its mission to protect maritime economic interests, the U.S. Coast Guard administers the Truman-Hobbs program to alter railroad and highway bridges that obstruct maritime traffic (see fig. 6). While this program can enhance maritime, railroad, and highway freight mobility, it is targeted toward maritime traffic and is not coordinated with other DOT freight mobility investments. Today's federal investments in freight railroad infrastructure are not guided by a clear federal freight strategy.
In 2006, DOT attempted to move beyond the traditional modal approach to freight transportation by developing a draft Framework for a National Freight Policy, which, among other things, incorporates some previously established federal freight railroad infrastructure funding mechanisms. Although this draft Framework represents an important step toward developing a national intermodal freight transportation policy, it does not go far enough, in our view, toward delineating a clear federal role and strategy for carrying out that policy. DOT describes its draft Framework as a living document and emphasizes that the nation’s freight transportation challenges are of such a nature and magnitude that governments at all levels and the private sector must work together to address them. We agree, and we note that as the draft Framework evolves, DOT and other stakeholders will have an opportunity to clarify their respective freight strategies.

As we have reported, the federal approach to a given transportation strategy should include clearly and consistently defined goals, roles, revenue sources, and funding mechanisms to ensure that federal investments in the nation’s intermodal freight transportation infrastructure will maximize national public benefits. DOT’s draft Framework sets forth some “objectives” for freight transportation, together with strategies and tactics for achieving them; acknowledges that a variety of public and private stakeholders play important roles in freight transportation; and identifies some funding mechanisms and other tools that the federal government can use to support freight infrastructure. However, in some instances, these objectives are vague, and federal and other stakeholders’ roles and funding mechanisms are not clearly and consistently defined.

For example, one DOT draft Framework objective is to “add physical capacity to the freight transportation system in places where investment makes economic sense,” with supporting strategies and tactics that include focusing on facilitating regionally based solutions for freight gateways and projects of national or regional significance and utilizing and promoting new and expanded financing tools, such as RRIF, to incentivize private sector investment. To implement this objective, DOT would need to define “economic sense”; develop criteria, as the draft Framework says, to identify specific freight gateways and projects of national or regional significance; and determine whether federal revenues should be used to help subsidize any project components and, if so, which federal funding mechanisms would be most appropriate.

As we have also reported, federal investments should be directed to maximize national public benefits. Allocating benefits and their costs among beneficiaries is difficult and may be subject to interpretation. Hence, it will be important for DOT to define national benefits and to establish criteria for determining whether federal investments are warranted. DOT’s draft Framework suggests, but does not explicitly identify as such, certain criteria for federal investment, such as a project’s national or regional significance, opportunities to incentivize more private investment in transportation infrastructure, and opportunities to leverage private and other public funds to add freight transportation capacity. Without a federal freight strategy, the existing federal freight funding mechanisms are not designed to maximize national public benefits.
For example, although all railroads may apply for RRIF loans, the only freight railroads that have been awarded loans have been Class II and III railroads, whose operations tend to be more regional and local. Also, the Federal Highway Administration’s (FHWA) Section 130 grant program mainly benefits localities by improving or eliminating railroad-highway grade crossings, and the program’s public safety benefits are more local than national. Benefits from the Truman-Hobbs program’s investments accrue primarily to private maritime shipping and secondarily to railroad companies by improving each mode’s infrastructure, thereby enhancing the efficiency of freight transportation. On the other hand, depending on the project, legislative earmarks can generate public and private benefits that could be national, regional, and local in scope; however, these projects do not compete for funding against other alternatives. For example, through the Projects of National and Regional Significance program, Congress earmarked funds to support the Chicago Region Environmental and Transportation Efficiency (CREATE) project, which is mainly designed to reduce railroad congestion in the nation’s largest railroad hub—the effects of which, among other things, could improve the mobility of the national freight railroad network, improve local commuter railroad service, and reduce railroad-highway grade crossing hazards and congestion. Finally, Class II and III railroads can use the Railroad Track Maintenance Credit—a tax credit—to offset capital investment expenditures, but as previously stated, individual Class II and III railroad operations tend to benefit the private and local sectors more than the nation as a whole.

In contrast to the federal government, some states that invest in freight railroads administer various goal-oriented and criteria-based programs that are funded through a mixture of state and federal resources specifically to produce anticipated state and local benefits. Some states have been helping short line railroads maintain track in their jurisdictions for almost 20 years. For example, the Tennessee DOT provides approximately $8 million in grants annually to 18 of the 20 Class III railroads in the state to fund track and bridge work, including bridge inspections and rehabilitation projects. As we have previously reported, governments at all levels—including states—have increasingly been providing support for freight railroad improvement projects that offer potential public benefits, and over 30 states have published freight plans that describe their goals and approach to freight-related investments. The scope of state-administered freight railroad programs includes railroad infrastructure improvements, construction of intermodal facilities, elimination of public railroad-highway grade crossings, and inspection of bridges. For example, the Pennsylvania DOT administers a matching grant program—funded at $10.5 million as of October 2006—to support freight railroad maintenance and construction costs; eligible recipients include freight railroads, transportation organizations, municipalities, municipal authorities, and other eligible users of freight railroad infrastructure. Officials from three of the nine state DOTs we interviewed said their agencies are developing and implementing multimodal freight policies. However, such initiatives may be limited by state and federal funding criteria that restrict most state transportation spending to highway infrastructure.
As we have reported, efforts to improve freight mobility are hampered by the highly compartmentalized structure and funding of federal transportation programs—often by transportation mode—which give state and local transportation agencies little incentive to systematically compare the trade-offs between investing in different transportation alternatives to meet mobility needs, because funding is tied to certain programs or types of projects. Officials from several state agencies and oversight organizations we interviewed stated that funding available for freight projects, regardless of mode, would be more useful than “stovepiped” funding that is available only for investment in certain transportation modes.

Officials at six of the state agencies and oversight organizations we interviewed administer freight railroad programs that have identified programmatic goals, eligibility criteria, and funding sources aimed at generating state and local benefits. For example, officials from the Kansas DOT told us that the goals of its loan program for local and regional railroads are to improve railroad lines, enhance railroads’ customer service to shippers, limit the number of trucks on highways, and increase state and local economic vitality by transporting local agricultural products. While officials from some state agencies that we interviewed acknowledged that public benefits are difficult to quantify for any public investments, six state agencies and oversight organizations we interviewed were trying to quantify them. For example, the Kansas DOT sponsored a study that found that the short line railroad system saves the state an estimated $49 million annually in pavement damage costs.

The scope of state freight railroad programs may be either broad, including infrastructure investments of all kinds for railroads of all sizes, or narrow, focusing on specific eligible projects and award recipients. For example, the Pennsylvania DOT has two broad grant programs for freight railroads and shippers, both of which may be used to fund maintenance and new construction projects. In contrast, the Tennessee DOT makes funds available specifically to Class III railroads by allocating funds for track and bridge rehabilitation. State freight railroad initiatives have supported investments in track rehabilitation and other infrastructure improvements, railroad acquisition and line preservation assistance, intermodal facility construction and increased industrial access to railroads, and road and railroad-highway crossing safety enhancements.

Some of the state entities we interviewed reported using a number of funding mechanisms for their freight railroad programs. Specifically, 6 of the 12 said they provide grants and long-term, below-market-rate loans, and one state reported issuing tax-exempt bonds. Some of these states require that entities applying for loans or grants secure matching funds. States fund freight railroad programs through state general funds, user fees, federal Section 130 and other grants, and other sources. Some states have taken an innovative approach to funding freight railroad infrastructure. For example, Tennessee created a user-fee-based Transportation Equity Fund to support investments in nonhighway infrastructure, including short line freight railroad track and bridge rehabilitation.
The fund is financed through the revenue from state sales taxes on diesel fuel paid by the railroad, air, and water transportation modes, and the portion available for the Tennessee Short Line Railroad Rehabilitation Track and Bridge grant program is typically $7 million to $8 million annually. The program’s purpose is to preserve freight railroad service and thereby contribute to the state’s economic development. Construction grants are funded at a 90 percent state and 10 percent local (nonstate) matching share. Each grant can be matched with in-kind work, cash contributions, or both.

States, localities, and railroads have used public-private partnerships as a strategic approach to develop freight-related transportation solutions that benefit both sectors. In using this approach to resolve freight issues, public and private participants of the partnerships we reviewed identified common goals, individual roles, and funding sources and mechanisms, which have affected partnership outcomes. In some cases, these partnerships have supported railroad bridge and tunnel projects. A well-structured partnership balances the various strengths, limitations, and respective contributions of both the public sector—federal, state, local, and regional—and private sector participants in order to secure specific public and private freight-related benefits.

Both the public and the private sectors have initiated freight railroad public-private partnerships. For example, according to AASHTO representatives we interviewed, in 2002 the Delaware DOT approached a Class I railroad to reopen the Shellpot Bridge, which had been out of service since 1994. The state associated the abandonment of this bridge with increased congestion on the Northeast Corridor and saw it as a threat to the competitiveness of the Port of Wilmington in attracting freight traffic. The state and the railroad jointly developed the project’s goals, roles, and funding mechanisms. The state agreed to finance the approximately $13.5 million cost of restoring the bridge by contributing $5 million in state grant appropriations and funding the remainder by issuing tax-exempt bonds. The railroad agreed to compensate the state over a 20-year period by paying a fee for each train car that uses the bridge.

In another public-private partnership, members of the Kansas City Terminal Railway Company and their project designer approached the state of Missouri and the Unified Government of Kansas City/Wyandotte County, Kansas, to propose assisting in financing the construction of two flyovers and the rehabilitation of a bridge. The purpose of these three infrastructure improvement projects was to separate freight trains from different railroads at several points where they came together to form what amounted to four-way stops for trains in the Kansas City region and caused a significant chokepoint on the U.S. freight railroad network (see fig. 7). The railroads had already determined the goals of their proposed public-private partnership and came to the bargaining table with proposed roles and funding mechanisms. The railroads acknowledged that they could pursue the project using strictly private market resources; however, a wholly private project would have taken longer to complete.
The state and county saw value in relieving their communities of the grade-crossing congestion this chokepoint caused, determined the project risk was acceptable, and each agreed to issue tax-exempt bonds that totaled over $190 million, which will be repaid by the railroads through user fees. In both the Delaware and Kansas City cases, the entities that initiated the partnership brought well-defined goals, identified stakeholder roles, and guaranteed a set amount of funding to the public-private partnership over a period of years.

Public-private partnerships can make funds available and define goals and roles for all stakeholders for large, expensive freight railroad projects when it is difficult for a public or private entity to fund the entire project on its own, or when a project is not part of a railroad’s strategic plan but would be beneficial to a locality’s or a region’s quality of life. In these partnerships, public and private players bring various strengths and limitations. The private sector often can bring a more global view of freight needs to the project planning process, help identify and implement projects, contribute significant funds, and promote efficient use of infrastructure. The public sector can offer various public financing tools, such as low-interest loans and private activity bonds, to create incentives for private investments in freight railroads that would not otherwise be made and to generate anticipated public benefits.

Public-private partnerships also present certain challenges. As we heard from both public and private freight railroad stakeholders, the extent to which the public sector can engage the private sector, identify anticipated public benefits from railroad investments, and provide funding that is commensurate with those benefits affects partnership outcomes. Our past work has shown that an integral part of public-private partnerships is ensuring that sound analytical approaches are applied locally and that meaningful data are available, not only to evaluate and prioritize infrastructure investments but also to determine whether public support is justified in light of a wide array of social and economic costs and benefits. Moreover, as private entities that own most of the nation’s railroad infrastructure, freight railroads typically have not worked with the public sector because of concerns about the requirements and regulations associated with federal funding. These railroads need to be convinced that a proposed infrastructure project will yield financial returns for their companies. Still another challenge is to reconcile the lengthy planning and construction time associated with public infrastructure projects with the shorter planning and investment horizons of private companies.

Overcoming congestion and improving mobility is one of the biggest transportation challenges facing the nation. Congestion increases delays and creates economic losses that cost Americans roughly $200 billion a year, according to DOT estimates. As we have previously reported, increases in freight traffic on all modes over the next 10 to 15 years are expected to put greater strain on ports, highways, airports, and railroads. In addition, we have found that this increase in freight transportation demand seems to be particularly acute on highways, since trucks transport over 70 percent of all freight tonnage nationally and freight truck traffic on urban highways more than doubled from 1993 through 2001.
The increased congestion, coupled with long lead times for completing infrastructure projects (5 to 15 years), may put pressure on all stakeholders, including the federal government, to find other, more effective investments to increase freight mobility. Increasing the capacity of the nation’s freight railroad network could be one way to meet future growth in freight transportation demand. However, as mentioned previously, aging railroad bridges and tunnels present physical constraints to meeting this projected increased demand for freight railroad transportation on key routes, thereby constraining capacity. For example, as we previously mentioned, 100-year-old bridges and tunnels that are currently in use—such as the moveable bridge over the Mississippi River and the Howard Street Tunnel in Baltimore—create chokepoints on the freight railroad network due to their operating conditions or outdated design. Currently, freight railroads are investing billions of dollars in freight railroad infrastructure to increase capacity, but because they invest in projects that will maintain or increase safety or provide the highest return on their investment, other investments may take priority over their most expensive pieces of infrastructure: bridges and tunnels. In addition, we have found that the railroads’ long-term ability to meet the projected growth in demand for freight railroad transportation is uncertain, which may increase pressure for public investment in private railroad infrastructure.

As we have previously reported, Congress is likely to receive further requests for funding and face additional decisions about how to invest in the nation’s freight railroad infrastructure. However, Congress’s ability to respond to these requests may be limited by (1) federal funding constraints and increased demand for infrastructure investment in other transportation modes, (2) differences in federal funding for different transportation modes, and (3) the lack of a strategic federal freight transportation plan to guide federal investments in freight transportation infrastructure.

Revenue from current federal transportation sources may not be sustainable. Because revenue from traditional transportation funding mechanisms such as the Highway Trust Fund may not keep pace with the increase in transportation demand, we designated transportation financing as a high-risk area in January 2007. Spending authorized under the recently enacted transportation funding authorization, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), is expected to outstrip the growth in trust fund receipts. As a result, the Department of the Treasury and the Congressional Budget Office (CBO) are forecasting that the trust fund balance will steadily decline and be negative by the end of fiscal year 2011. In addition, the nation’s long-term fiscal challenges will constrain decision makers’ ability to use other funding mechanisms, such as grants and tax expenditures, for transportation needs.

Differences in federal funding for different transportation modes have created a competitive disadvantage for freight railroads. Because the federal government has an interest in an efficient national freight transportation system, the federal role in freight transportation needs to recognize that the freight transportation system encompasses many modes that operate in a competitive marketplace and are owned, funded, and operated by both the private and the public sectors.
However, current federal transportation policy treats each freight transportation mode differently, thereby creating competitive advantages for some modes over others. For example, trucking companies and barges use infrastructure that is owned and maintained by the government, while railroads use infrastructure that they pay taxes on, own, and maintain. Trucking and barge companies pay fees and taxes for the government-funded infrastructure they use, but their payments generally do not cover the costs they impose on highways and waterways. The federal subsidy that makes up the difference between the government’s costs and users’ payments gives trucking and barge companies a competitive advantage over the railroads. CBO has observed that if all modes do not pay their full costs, the result is inefficient use of roads and waterways and greater government spending than would otherwise be necessary if capacity investments are made in anticipation of demand that does not occur.

As noted earlier in this report, the federal government lacks a strategic freight transportation plan to guide its involvement in freight-related capital infrastructure investments. DOT’s draft Framework for a National Freight Policy represents an initial step toward such a plan, but it assumes a federal role without indicating whether federal involvement is appropriate or, when appropriate, what the goals of federal investment should be, what specific roles the federal government and other stakeholders should play, and what federal revenue sources and funding mechanisms should be used to support freight-related investments. As we have previously reported, critical factors and questions can be used as criteria for determining the appropriateness of a federal role, and we have developed a framework with components that we believe would be helpful in guiding any future federal freight-related investments. Implementing this GAO framework would include setting national goals for federal investment in freight-related infrastructure, clearly defining federal and other stakeholder roles, and identifying sustainable revenue sources and cost-effective funding mechanisms that can be applied to maximize the national public benefits of federal investments. In light of the federal government’s long-term fiscal imbalance, it is important for federal policy makers to determine how the federal government can support efficient, mode-neutral, transparent, and sustainable investments in freight-related infrastructure.

In our report on 21st century challenges facing the federal government, we defined critical factors and questions that are useful as criteria for determining the appropriate federal role in a government program, policy, function, or activity. These critical factors and questions are designed to address the legislative basis for a program, its purpose and continued relevance, its effectiveness in achieving goals and outcomes, its efficiency and targeting, its affordability and sustainability, and its management. The factors and questions can be used as criteria for determining the appropriateness of federal involvement in freight-related transportation, including freight railroad projects, as shown in table 2. If federal policy makers determine that there is an appropriate role for the federal government in freight infrastructure investments, including those related to railroads, the implementation of that role should have several components.
From our past work on transportation investment—in such areas as intercity passenger rail, intermodal transportation, and marine transportation—we have defined a systematic framework that can also guide the implementation of any future federal role in freight-related infrastructure investments. Our framework’s components include setting national goals, establishing clear stakeholder roles, and providing sustainable funding (see table 3).

In conjunction with GAO’s framework, it would also be important to evaluate freight investments periodically to determine the extent to which expected benefits are being realized. Evaluations also create opportunities for periodically reexamining established goals, stakeholder roles, and funding approaches, and they provide a basis for modifying them as necessary. In addition, evaluations help to ensure accountability and provide incentives for achieving results. Encouraging or requiring the identification of all project costs and of all parties who will bear the costs can help ensure that the costs are apportioned among all stakeholders equitably. Leading private and public organizations that we have studied in the past have stressed the importance of developing performance measures and then linking investment decisions and their expected outcomes to overall strategic goals and objectives.

The first component of GAO’s framework for guiding the federal role in freight-related infrastructure investment is a set of clearly defined national goals. Such goals can help chart a clear direction, establish priorities among competing demands, and specify the desired results of any federal investment. Since many stakeholders are involved in the freight transportation system, the achievement of national goals for the system hinges on the federal government’s ability to forge effective partnerships with nonfederal entities. Decision makers need to balance national goals with the unique needs and interests of all nonfederal stakeholders in order to leverage the resources and capabilities of state and local governments and the private sector. National goals should be structured in a way that allows for reliably estimating and comparing national public benefits and national public costs. As we have previously reported, quantifying public benefits can be difficult, yet an effort should be made to determine that the anticipated public benefits are sufficient to justify the proposed levels of public investment. For example, at the state level, the Pennsylvania DOT evaluates and justifies freight railroad investments, in part, by estimating the wear and tear imposed by trucks on highways.

The primary goal of federal investments in freight infrastructure should be to maximize the national public benefits of the investments. One way to focus these goals could be through federally designated Projects of National and Regional Significance, a program that has been designed to address critical national economic and transportation needs and has funded highway and railroad infrastructure projects. For example, one goal could be to improve intermodal freight mobility—which encompasses air, railroad, water, and highway facilities and infrastructure—at designated ports of national significance that serve multistate regions or large populations. Federal policy makers and other stakeholders could define their respective roles in many different ways once the goals for the federal role in freight transportation infrastructure have been established.
However, the key elements in defining the federal and other stakeholder roles would be to create incentives for collaboration, secure benefits, and promote equity for all stakeholders, both public and private, that invest in freight-related infrastructure projects. Defining these elements is especially important for the federal role in freight railroad infrastructure investments because, while most of that infrastructure is privately owned, investments to improve safety and increase capacity may benefit stakeholders at all levels (national, regional, state, local, and private sector).

In our prior work, we have found that, in defining stakeholder roles, it is important to match capabilities and resources with appropriate goals. This is important for federal participation because other stakeholders may want to emphasize other priorities and use federal funds in ways that may not achieve national public benefits. This can happen if other stakeholders seek to (1) transfer a previously local function to the federal arena or (2) use federal funds to reduce their traditional levels of commitment. One aim of federal participation in infrastructure investments is to promote or supplement expenditures that would not occur without federal funding and to avoid substituting federal funding for funding that would otherwise have been provided by private or other public investors.

Further refinements to DOT’s draft Framework could help to define stakeholder roles in two ways: first, by acknowledging that the interests of federal, state, and local entities may compete, and second, by recognizing where public and private sector interests meet and diverge. When the federal government invests in freight railroad infrastructure, it could justify its involvement by establishing criteria for projects that (1) are based on national freight goals, (2) are designed to capture national freight transportation benefits, and (3) direct funds to state, local, and private entities that would spend the funds in accordance with the national goals. For example, the federal government might justify its investment in a project that had national goals of improving interstate freight mobility, reducing pollution and congestion, and enhancing safety on a multistate railroad and highway transportation corridor. In contrast, states and localities seek public benefits that accrue within their jurisdictions, such as improved automobile safety at grade crossings and reduced air pollution within a regional attainment area, and are able to channel state, local, and discretionary federal funds accordingly.

When examining public versus private interests, public stakeholders must recognize that railroads are privately owned and invest resources to maximize shareholder returns and enhance the efficiency and capacity of their operations. Some railroad infrastructure projects have spillover effects that produce public benefits, such as more efficient goods movement. Yet other railroad infrastructure projects that could benefit the public do not meet railroads’ internal return-on-investment criteria, and therefore the railroads would not invest in them, and the public would not realize the benefits. One possible way of defining stakeholder roles could be through public-private partnerships.
As we have stated earlier, public-private partnerships create a forum for bringing diverse stakeholders together around an issue of mutual interest to determine how best to share resources, identify stakeholder responsibilities, and achieve public and private benefits. Encouraging public-private partnerships to provide efficient solutions to freight transportation needs could increase the likelihood that the most worthwhile improvements would be implemented and that projects would be operated and maintained efficiently.

One example of a public-private partnership that addresses various private and public stakeholder interests in railroad infrastructure is the CREATE project in the Chicago area. The drive to make significant investments in the Chicago area’s railroad infrastructure came from public and private railroad stakeholders because of their concern over the heavy railroad congestion in that area. Under the CREATE project, stakeholders established individual roles that included owning and managing specific projects and assuming joint financial obligations. The railroads initially invested $100 million to begin addressing their interests, the federal government has added $100 million by designating CREATE as a Project of National or Regional Significance, and the state of Illinois and the city of Chicago have pledged $100 million and $30 million, respectively, to begin addressing passenger railroad projects. CREATE stakeholders also plan to leverage other federal, state, and private funds over the lifetime of the project.

The Alameda Corridor Program in the Los Angeles area provides another example of how effective partnering allowed the capabilities of the various stakeholders to be more fully utilized. Called the Alameda Corridor because of the street it parallels, the program created a 20-mile, $2.4 billion railroad express line connecting the ports of Los Angeles and Long Beach to the transcontinental railroad network east of downtown Los Angeles. The express line eliminates approximately 200 street-level railroad crossings, relieving congestion and improving cargo mobility. This project made substantial use of local stakeholders’ ability to raise funds. While the federal government participated in the cost, its share was about 20 percent of the total. In addition, about 80 percent of the federal assistance is in the form of a loan rather than a grant.

A well-designed and strategic national freight transportation policy, one that includes a federal component, can help encourage investment by other public and private stakeholders and maximize the application of limited federal dollars for freight-related infrastructure. While it is important to ensure that such a policy promotes federal investments in freight infrastructure that generate national public benefits, especially when those investments are in privately owned and operated freight railroad infrastructure, it is also important to note that any federal investments will face federal financial constraints. Although federal investments could be crucial to securing the national public benefits of certain freight-related infrastructure projects that would not otherwise proceed, the scarcity of federal funds puts a premium on justifying and targeting the use of federal funds for these projects to address critical needs and maximize benefits.
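To make the scale of such targeted federal participation concrete, the back-of-the-envelope calculation below applies the approximate shares cited for the Alameda Corridor to its full $2.4 billion cost. This is a rough sketch: the shares are stated only as “about 20 percent” and “about 80 percent,” and the actual financing terms (interest rate, repayment schedule) are not given here.

```python
# Rough cost-share arithmetic for the Alameda Corridor example above.
# The shares are approximate figures from the report; applying them to the
# full program cost is a simplifying assumption for illustration.

total_cost = 2.4e9      # total program cost, in dollars
federal_share = 0.20    # federal participation, roughly 20 percent
loan_fraction = 0.80    # roughly 80 percent of federal assistance was a loan

federal_total = total_cost * federal_share    # ~$480 million
federal_loan = federal_total * loan_fraction  # ~$384 million, repayable
federal_grant = federal_total - federal_loan  # ~$96 million
nonfederal = total_cost - federal_total       # ~$1.92 billion

print(f"Federal total: ${federal_total / 1e6:,.0f} million")
print(f"  as loan:     ${federal_loan / 1e6:,.0f} million")
print(f"  as grant:    ${federal_grant / 1e6:,.0f} million")
print(f"Nonfederal:    ${nonfederal / 1e6:,.0f} million")
```

On these assumptions, the federal grant exposure is on the order of $100 million of a $2.4 billion project, with most of the federal assistance structured to be repaid.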
As we have previously reported, determining the scope of government involvement in transportation investments entails three major steps: (1) determining that the project is worthwhile by applying a rigorous cost-benefit analysis or similar study; (2) justifying government involvement on the basis of known criteria; and (3) deciding on the level of public subsidy consistent with local, state, regional, or national interests and benefits. Currently, most federal freight investments come from the fiscally constrained General Fund and Highway Trust Fund, and typically these investments are not subject to a thorough benefit-cost analysis or to the consistent application of project criteria, nor are they funded with the assurance that the funding provided by public and private beneficiaries is commensurate with the benefits these parties receive.

Federal investments in freight infrastructure must be justified and meet objective criteria to maximize the impact of federal funds. Justifying government involvement in freight infrastructure projects involves identifying and quantifying project costs and public and private benefits, and having clear guidelines specifying the conditions under which public involvement is warranted. Given constraints on federal, state, and local funding, we have advocated that public entities implement project justification tools such as benefit-cost analysis to better assess proposed transportation investments and accordingly target limited funds. Results-oriented assessments can be used to determine what is needed to obtain specific national outcomes.

In October 2006, we recommended that DOT, as it continues to draft the Framework for a National Freight Policy, consider strategies to create a level playing field for all freight modes and recognize the highly constrained federal fiscal environment by developing mechanisms to assess and maximize public benefits from federally financed freight transportation investments. Furthermore, as we testified in March 2007, the federal government should make ensuring accountability for results, as well as maximizing benefits, high priorities in deciding on federal investments in transportation infrastructure. Unfortunately, we have found that formal analyses are not often used in deciding among alternative projects, that evaluations of outcomes are not typically conducted, and that the evaluations that are done show that projects often do not produce anticipated outcomes.

The public sector faces many challenges in quantifying national, regional, state, and local benefits, while railroads are better able to determine the monetary and operational benefits of proposed infrastructure projects and can invest accordingly. For example, railroads can assess how much each hour of train delay costs them, but public entities cannot easily quantify the environmental benefits of faster freight railroad transport and less truck traffic. Representatives of three state DOTs we interviewed acknowledged the difficulty of quantifying public benefits, which may make it difficult to judiciously allocate scarce transportation funds to those projects that may accrue the highest public benefits.
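The following is a minimal sketch of the kind of comparison such project justification tools perform: discount each year’s estimated public benefits to present value and compare them with project costs. Every value in the sketch (the benefit categories, dollar amounts, discount rate, and time horizon) is a hypothetical assumption for illustration, not data from any project discussed in this report.

```python
# Minimal benefit-cost sketch; all values are hypothetical.

def npv(cashflows, rate):
    """Net present value of year-indexed cash flows (year 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

YEARS = 20
DISCOUNT_RATE = 0.07  # hypothetical real discount rate

# Hypothetical annual public benefits of a grade-separation project.
annual_benefits = (
    4_000_000    # avoided highway congestion delay
    + 1_500_000  # avoided pavement damage from diverted trucks
    + 500_000    # grade-crossing safety improvements
)

capital_cost = 60_000_000                     # year-0 construction cost
benefit_stream = [0] + [annual_benefits] * YEARS

pv_benefits = npv(benefit_stream, DISCOUNT_RATE)
ratio = pv_benefits / capital_cost
print(f"PV of benefits: ${pv_benefits / 1e6:.1f} million")
print(f"Benefit-cost ratio: {ratio:.2f}")
```

A ratio above 1.0 suggests the project is worthwhile on these assumptions; the harder analytical work, as the surrounding discussion notes, is producing defensible dollar estimates for benefits, such as environmental improvements, that the public sector cannot easily quantify.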
According to the Transportation Research Board (TRB), public support for freight infrastructure projects must be established on a project-by-project basis to determine if a project produces certain benefits, such as reductions in the external costs of transportation, efficiencies in the transportation system beyond those recognized by the private sector, or improvements in public safety. TRB stated that if government involvement cannot be justified on one of these grounds, the private sector should undertake the project.

One federal program that awards funds using project justification criteria is the Federal Transit Administration’s discretionary New Starts program. This program is the federal government’s primary source of funds for capital investment in locally planned, implemented, and operated transit. Potential New Starts projects must meet certain project justification criteria (e.g., mobility improvements and operating efficiencies) and demonstrate adequate local financial support (e.g., the ability of the sponsoring agency to fund the operation and maintenance of the entire system once the project is built). A comparable approach could be designed so that freight railroad infrastructure investments—proposed by state or local governments, private railroads, or public-private partnerships—meet appropriate project justification criteria, demonstrate public and private support, and provide the lowest cost to the federal government. Different funding mechanisms and revenue sources could also be used to implement any future federal role in freight infrastructure investments. See appendix III for a more complete discussion of these revenue sources and funding mechanisms.

Projected increases in freight transportation demand will likely increase the importance of the nation’s freight railroad infrastructure. Bridges and tunnels are critical and expensive parts of that infrastructure. Because most of the freight railroad network is privately owned, the railroads have a keen financial interest in maintaining and investing in their bridges and tunnels. The federal role in overseeing the public safety of these structures, and in funding improvements to them, has been limited.

With respect to safety, we have found in our prior work that a risk-management approach to oversight of companies’ overall management of safety risks provides an additional assurance of safety in conjunction with inspections. FRA has adopted this risk-management approach in applying its guidelines for bridge management during its bridge safety surveys of individual railroads. However, a more consistent and systematic approach to selecting railroads for bridge safety surveys, based on data about railroads’ bridge management programs, such as whether the railroads have regular inspections by a qualified civil engineer and how they record and use that bridge inspection data, could enhance the effectiveness of FRA’s limited resources for bridge and tunnel safety. This approach could help target FRA’s limited bridge inspection resources toward railroads that present the greatest safety risk, especially the numerous short lines that may have more deteriorated infrastructure and fewer technical and financial resources to maintain their bridges and tunnels.
With respect to the federal role in freight-related infrastructure, including railroad bridges and tunnels, the federal approach to such investments needs to be better structured to maximize the achievement of national public benefits such as increased freight mobility, reduced congestion, and improved environmental quality. Although the current federal structure of loans, credits, and grants administered by different agencies with different missions from disparate funding sources may attain some national public benefits, that structure is not guided by a national freight strategy and may miss opportunities for an even higher return of national public benefits for federal expenditures. DOT has taken a first step in the direction of articulating such a strategy by developing its Framework for a National Freight Policy, but we believe that the agency needs to go further in developing a true national freight transportation strategy that can help organize and unify the current structure to achieve that higher return. Our past work on public investments in transportation has found that such a strategy should focus on national freight transportation-related goals, involve all public and private stakeholders, and distribute costs equitably across all public and private beneficiaries.

To enhance the effectiveness of its bridge and tunnel safety oversight function, we recommend that the Secretary of Transportation direct the Administrator of the Federal Railroad Administration to devise a systematic, consistent, risk-based methodology for selecting railroads for bridge safety surveys, to ensure that the surveys include railroads that are at higher risk of not following FRA’s bridge safety guidelines and of having bridge and tunnel safety issues.

To help better focus limited federal resources, we recommend that the Secretary of Transportation ensure that its draft Framework for a National Freight Policy (1) includes clear national goals for federal involvement in freight-related infrastructure investments across all modes, including freight railroad investments; (2) establishes and clearly defines roles for all public and private stakeholders; and (3) identifies funding mechanisms for federal freight-related infrastructure investments, including freight railroad investments, that provide the highest return in national public benefits for limited federal expenditures.

We provided a draft of this report to DOT for review and comment prior to finalizing the report. DOT and FRA officials—including FRA’s Associate Administrator for Safety—generally agreed with the information in this report, and they provided technical clarifications, which we have incorporated as appropriate. These officials agreed with the recommendation related to the methodology for selecting railroads for bridge safety surveys and said that they are already taking steps to implement it, and DOT officials said that they would consider the recommendation concerning changes to DOT’s draft Framework for a National Freight Policy.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to the appropriate congressional committees and to the Secretary of Transportation. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

To determine what information is maintained by railroads on the condition of their bridges and tunnels, and the contribution of this infrastructure to congestion, we reviewed documentation from railroads on bridge and tunnel data management policies, inspection procedures, sample inspection reports, and capital improvement plans. We also determined the federal role in collecting and reporting information on railroad bridges and tunnels by interviewing officials from federal agencies, state agencies, freight railroads, and industry associations (see table 4), and by reviewing bridge and tunnel data collected and maintained by these federal agencies. To determine to what extent bridges and tunnels contribute to freight railroad congestion, we reviewed literature on freight railroad congestion, railroad corridor plans, and freight demand studies to identify current levels of freight railroad congestion, major factors contributing to congestion, and proposed solutions. We also interviewed representatives from industry associations and railroads to understand how this information is used, what challenges railroads face in maintaining and replacing railroad bridges and tunnels, and what strategies railroads use to enhance capacity and alleviate congestion. We did not independently verify the accuracy of public or private bridge and tunnel condition information, inspection reports, or congestion information. In addition, we did not independently assess the conditions of bridges and tunnels.

To identify the federal role in overseeing railroad bridge and tunnel safety, we reviewed public laws and interviewed officials from the public agencies and railroads listed in table 4. In particular, we discussed the Federal Railroad Administration’s (FRA) structural safety oversight role with FRA’s Chief Structural Engineer, all five FRA bridge specialists, and one FRA regional track specialist, and we asked railroads about their interactions with FRA. We reviewed examples of FRA’s bridge safety survey documentation to determine the content of these surveys and what actions FRA takes after assessing a railroad’s bridge conditions. We also accompanied an FRA bridge specialist on a bridge safety survey and other informal bridge and tunnel observations. We reviewed examples of FRA emergency orders, compliance agreements, and structural observation reports to determine how FRA enforces its oversight role. Because there are more bridges than tunnels in the United States and because FRA has established a policy on bridge safety, we reviewed more information on railroad bridges than on tunnels. Moreover, because we used FRA’s records to understand FRA processes and actions, we did not independently verify the reliability of the data in this sample of FRA’s observation records.

To determine how public funds are currently used for railroad infrastructure investments, including those for bridges and tunnels, we interviewed the entities included in table 4 and synthesized relevant information from these entities, as well as from the Federal Highway Administration and the Joint Committee on Taxation.
We did not independently verify the accuracy of the self-reported cost information provided by the railroads, public agencies, and professional associations. We reviewed the Department of Transportation’s (DOT) draft Framework for a National Freight Policy. We also analyzed pertinent legislation and analyzed and synthesized relevant information from our reports and other ongoing work.

To determine what criteria and framework could be used to guide the future federal role in freight-related infrastructure investments, including those for railroad bridges and tunnels, we relied extensively on perspectives gained from our past work on transportation and infrastructure systems and federal investment strategies. We also reviewed DOT’s draft Framework for a National Freight Policy. We used our prior work and conventional economic reasoning to identify key considerations regarding possible revenue sources and funding mechanisms for federal government support for freight-related infrastructure investment and to evaluate potential revenue sources and funding mechanisms on the basis of those considerations.

In addressing all of our objectives, we conducted five site visits to observe the conditions of selected bridges and tunnels on Class I, II, and III railroads; understand maintenance and deterioration issues inherent in different geographies and structure types; interview railroad and state agency personnel who manage, inspect, and maintain these structures; interview railroad operations personnel who monitor traffic capacity and congestion and finance personnel who determine capital investment priorities and allocations; and meet with state and local transportation agency officials. For a complete list of all entities interviewed, including those interviewed as part of our site visits, see table 4. We selected our site visit locations—Baltimore, Maryland, and Washington, D.C.; Illinois and Iowa; Kansas and Missouri; Ohio and West Virginia; and Oregon—based on geographic distribution and the presence of large and small railroads, public-private partnership stakeholders, and state DOTs involved in freight railroad funding or large freight railroad public-private partnerships. In addition to interviews conducted as part of our site visits, we interviewed representatives from the six largest Class I freight railroads in the United States; Amtrak; industry associations; federal, state, and local transportation officials; and federal agencies involved with collecting information on, overseeing, or providing funding for railroad bridges and tunnels. We also interviewed additional state agencies based on their involvement in railroad bridge and tunnel oversight, freight railroad funding, or major freight railroad public-private partnerships. Table 4 lists the names and locations of all railroads; federal, state, and local agencies; industry associations; and transportation, engineering, and academic experts we interviewed as part of our review.

Different funding mechanisms and revenue sources can be used to implement any future federal role in freight infrastructure investments. Two main revenue sources are available to the federal government for financing freight infrastructure investments: (1) general revenue, which comes primarily from broad-based personal and business income taxes, and (2) beneficiary financing revenue (such as user fees or fuel taxes), which comes from taxes or fees assessed on specific groups that would benefit from the federal investment.
Revenue from both of these sources could be used to increase investment in freight railroad infrastructure beyond the level that the railroads would provide without federal support. We note, however, that all revenue sources have opportunity costs, that is, the costs of any benefits forgone from alternative investments that could have been made with that revenue.

As discussed earlier in this report, the federal government currently uses three main funding mechanisms to support freight railroad infrastructure: grants, loans, and tax credits. Each funding mechanism has its own advantages and limitations, but some implications apply to all three. For example, while the three mechanisms may make federal subsidies available for freight infrastructure investments, they may not necessarily increase the total amount of funding provided for those investments. Instead, these subsidies might result in the substitution of federal funds for the railroads’ own funds for investments that they would have made themselves, even without federal support. Revenue sources and potential funding mechanisms need to be evaluated in terms of several key considerations—including equity, sustainability, and efficiency for revenue sources, and efficiency and transparency for funding mechanisms—as discussed below.

Equity. Equity is often assessed according to two principles: the benefit principle and the ability-to-pay principle. Equity occurs according to the benefit principle when those who pay for a service are the same as those who benefit from the service. Under the ability-to-pay principle, those who are more capable of bearing the burden of taxes or fees pay more in taxes and fees than those with less ability to pay, and a tax or fee structure is generally considered more equitable if that is the case. The use of general revenues is most equitable according to the benefit principle when the benefits are diffused across all taxpayers. Beneficiary financing sources (per-container or per-railroad-car fees or commodity-specific taxes) can be a more equitable funding source when the benefits are more focused on a locality or set of users and it is possible to collect the additional revenues from beneficiaries through higher fees or taxes. Either approach could be consistent with the ability-to-pay principle, depending on how the revenue source is structured. A combination of beneficiary financing, federal general revenue, and local matching funds could also be used to enhance equity by linking the amount of payment for an infrastructure investment to the anticipated amount of private, national, and local benefits gained, although these benefits may be hard to quantify.

Sustainability. Sustainability can be defined as the ability of a revenue source to maintain a given level of federal expenditure for an investment over time. Technological change or inflation could affect the sustainability of some beneficiary financing revenue sources by influencing revenue levels or their purchasing power. But these sources can be more sustainable if they have the flexibility to respond to reductions in demand or consumption and can be indexed to inflation or otherwise periodically adjusted. The sustainability of general revenue could be affected by the federal government’s long-term structural fiscal imbalance.

Efficiency. Efficiency implications exist for both the choice of revenue source and the choice of funding mechanism.
For revenue sources, efficiency can be assessed by the economic behavioral changes likely to result from the use of each source and by how much accountability is provided. Using general revenue is likely to cause smaller behavioral changes than using beneficiary financing. Beneficiary financing is likely to cause larger behavioral changes in raising a given amount of revenue because the impacts of a revenue increase would be more concentrated in a geographic location (for example, a user fee assessed for using a specific bridge or other structure) or on a group of beneficiaries (for example, a diesel fuel tax assessed only on railroads). However, these behavioral changes can have either negative or positive consequences for economic efficiency, so that, depending on the circumstances, raising revenue from either source could be more or less efficient. In terms of accountability, the efficiency of a revenue source can be enhanced by collecting funds from the groups that benefit from federal investments in freight infrastructure.

For funding mechanisms, efficiency can be defined as the amount of benefit gained for the amount of federal resources provided. Grants may generally be more efficient than loans in that their administrative costs may be lower. For tax credits, efficiency—or the benefits gained for the forgone tax revenue—is both difficult to calculate and difficult to control, because private firms, rather than the government, often control the use of the credited funds. Therefore, the government may have less opportunity to direct the funds toward generating specified national public benefits than it does with grants or loans. To increase the efficiency of grants, maintenance-of-effort provisions could be incorporated to decrease the likelihood that the funding provided through them will be substituted for other funds, rather than combined with other funds to increase the total investment. Although tax credits do not involve outlays of federal funds, they do have analogous costs in forgone tax revenue that would have to be considered in evaluating their efficiency.

Transparency. Transparency can be defined as the extent to which the costs of federal infrastructure investments are visible when using a funding mechanism. The commitment of federal resources is visible if there is a direct appropriation for a federal grant or loan program. With a grant or a loan, the federal government can readily demonstrate how much money was invested in what infrastructure. These funding mechanisms can also be guided by objective, transparent criteria in conjunction with congressional control over annual funding levels. With tax credits for railroad infrastructure investment, however, it is less visible how much the investment is costing the government through forgone revenue, and it is harder for Congress to make trade-offs with other discretionary spending programs.

In addition to the contact named above, Rita Grieco (Assistant Director); Jay Cherlow; Steve Cohen; Elizabeth Eisenstadt; Alana Finley; Greg Hanna; Carol Henn; Bert Japikse; Richard Jorgenson; Denise McCabe; Elizabeth McNally; Sara Ann Moessbauer; Josh Ormond; Laura Shumway; Ryan Siegel; and James Wozny made key contributions to this report.

Performance and Accountability: Transportation Challenges Facing the Congress and the DOT. GAO-07-545T. Washington, D.C.: March 6, 2007.

High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Rail Safety: The Federal Railroad Administration Is Taking Steps to Better Target Its Oversight, but Assessment of Results Is Needed to Determine Impact. GAO-07-149. Washington, D.C.: January 26, 2007.

Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006.

Freight Railroads: Industry Health Has Improved, but Concerns about Competition and Capacity Should Be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006.

Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined. GAO-05-690. Washington, D.C.: September 23, 2005.

Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005.

21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.

Highway and Transit Investments: Options for Improving Information on Projects' Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005.

Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004.

Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003.

Marine Transportation: Federal Financing and a Framework for Infrastructure Investments. GAO-02-1033. Washington, D.C.: September 9, 2002.

Surface and Maritime Transportation: Developing Strategies for Enhancing Mobility: A National Challenge. GAO-02-775. Washington, D.C.: August 30, 2002.

U.S. Infrastructure: Funding Trends and Federal Agencies' Investment Estimates. GAO-01-986T. Washington, D.C.: July 23, 2001.

Executive Guide: Leading Practices in Capital Decision Making. GAO/AIMD-99-32. Washington, D.C.: December 1998.

Federal Budget: Choosing Public Investment Programs. GAO/AIMD-93-25. Washington, D.C.: July 23, 1993.

Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness. GAO/RCED-92-16. Washington, D.C.: November 5, 1991.
Freight railroads account for over 40 percent (by weight) of the nation's freight on a privately owned network that was largely built almost 100 years ago and includes over 76,000 railroad bridges and over 800 tunnels. As requested, GAO provides information on this infrastructure, addressing (1) the information that is available on the condition of railroad bridges and tunnels and on their contribution to railroad congestion, (2) the federal role in overseeing railroad bridge and tunnel safety, (3) the current uses of public funds for railroad infrastructure investments, and (4) criteria and a framework for guiding any future federal role in freight infrastructure investments. GAO reviewed federal bridge safety guidelines and reports, conducted site visits, and interviewed federal, state, railroad, and other officials. Little information is publicly available on the condition of railroad bridges and tunnels and on their contribution to congestion because the railroads consider this information proprietary and share it with the federal government selectively. Major (Class I) railroads maintain detailed repair and inspection information, while other (Class II and III) railroads vary, from keeping detailed records to lacking basic condition information. Despite their age, bridges and tunnels are not the main cause of congestion, although some do constrain capacity. Because bridge and tunnel work is costly, railroads typically make other investments to improve mobility first. The federal role in overseeing the safety of railroad bridges and tunnels is limited because the Federal Railroad Administration (FRA) has determined that most railroads are sufficiently ensuring safe conditions. FRA has issued bridge management guidelines, makes structural observations, and may take enforcement actions to address structural problems. However, FRA bridge specialists use their own methodologies, rather than a systematic, consistent, risk-based methodology, to select smaller railroads for safety surveys and therefore may not target the greatest safety threats. Federal funds are used to meet many different goals but are not invested under any comprehensive national freight strategy, nor are the public benefits they generate aligned with any such strategy. Some state investments are structured to produce state and local economic and safety benefits, and public-private partnerships have facilitated investments designed to produce public and private benefits. GAO has identified critical questions that can serve as criteria for reexamining the federal role in freight investments--including railroad bridge and tunnel investments--and a framework for implementing that role that includes identifying national goals, clarifying stakeholder roles, and ensuring that revenue sources and funding mechanisms achieve maximum national public benefits. The Department of Transportation's draft Framework for a National Freight Policy takes a step forward, but more is needed to guide the implementation of a federal role in freight transportation investments.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business and is especially important for government agencies, where maintaining the public's trust is essential. While the dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have enabled organizations such as FDIC to better achieve their missions and provide information to the public, these changes also expose federal networks and systems to various threats. For example, the Federal Bureau of Investigation has identified multiple sources of cyber threats, including foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled employees working within an organization. According to a May 2005 report by the U.S. Secret Service and the Computer Emergency Response Team (CERT) Coordination Center, "insiders pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases." These concerns are well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. For example, the number of incidents reported by federal agencies to the United States Computer Emergency Readiness Team (US-CERT) increased dramatically over the past 3 years, from 3,634 incidents reported in fiscal year 2005 to 13,029 incidents in fiscal year 2007 (about a 259 percent increase). Without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain or manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Our previous reports, and those by inspectors general, describe persistent information security weaknesses that place federal agencies at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in force today. Recognizing the importance of securing federal agencies' information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the agency, using a risk-based approach to information security management. FDIC is an independent agency created by Congress that maintains the stability and public confidence in the nation's financial system by insuring deposits, examining and supervising financial institutions, and managing receiverships. Congress created FDIC in 1933 in response to the thousands of bank failures that occurred in the 1920s and early 1930s. The corporation identifies, monitors, and addresses risks to the deposit insurance funds when a bank or thrift institution fails.
The Bank Insurance Fund and the Savings Association Insurance Fund were established as FDIC responsibilities under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, which sought to reform, recapitalize, and consolidate the federal deposit insurance system. The act also designated FDIC as the administrator of the Federal Savings & Loan Insurance Corporation Resolution Fund, which was created to complete the affairs of the former Federal Savings & Loan Insurance Corporation and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. The Bank Insurance Fund and the Savings Association Insurance Fund merged into the Deposit Insurance Fund on February 8, 2006, as a result of the President signing the Federal Deposit Insurance Reform Act of 2005 into law. Under the Federal Deposit Insurance Reform Act of 2005, FDIC was required to ensure that approximately 7,400 eligible member institutions received a one-time assessment credit totaling $4.7 billion. FDIC insures deposits in excess of $4 trillion for its 8,571 member institutions. It had a budget of about $1.1 billion for calendar year 2007 to support its activities in managing the funds. For that year, it processed almost 16.4 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information that it collects. Its local and wide area networks interconnect these systems. To support its financial management functions, the corporation relies on many systems, including the New Financial Environment (NFE), a corporate-wide effort focused on implementing an enterprisewide, integrated software system. In addition, the corporation relies on the Assessment Information Management System II (AIMS II) to calculate and collect FDIC deposit insurance premiums and Financing Corporation bond principal and interest amounts from insured financial institutions. FDIC financial systems also process and track financial transactions such as disbursements made to support operations. Under FISMA, the Chairman is responsible for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency's information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the corporation's chief information officer (CIO) the authority to ensure compliance with the requirements imposed on the agency under FISMA. Two deputies to the Chairman, the Chief Financial Officer and the Chief Operating Officer, have information security responsibilities. The Chief Financial Officer has such responsibilities insofar as he is part of a senior management group that oversees the NFE and AIMS II security team. He is also responsible for the preparation of financial statements and ensures that they are fairly presented and demonstrate discipline and accountability. The Chief Operating Officer supervises the CIO, who is responsible for developing and maintaining a corporate-wide information security program and for developing and maintaining information security policies, procedures, and control techniques that address all applicable requirements.
The CIO also serves as the authorizing official with the authority to approve the operation of the information system at an acceptable level of risk to the enterprise. The CIO supervises the Chief Information Security Officer, who is in charge of information security at the corporation. The Chief Information Security Officer serves as the CIO's designated representative responsible for the overall support of the certification and accreditation activities. FDIC has made significant progress in mitigating previously reported information security weaknesses. Specifically, it has corrected or mitigated 16 of the 21 weaknesses that we had previously reported as unresolved at the completion of the 2006 audit (see app. II). For example, FDIC has enhanced physical security controls, instructed personnel to use more secure e-mail methods to protect the integrity of certain accounting data transferred over an internal communication network, updated the NFE security plan to clearly identify all common security controls, developed procedures to report computer security incidents, and updated the NFE contingency plan. While the corporation has made significant progress in resolving known weaknesses, it has not completed actions to mitigate the remaining five weaknesses. Specifically, FDIC has not effectively generated NFE audit reports; maintained a complete listing of all NFE configuration items, including application software, data files, software development tools, hardware, and documentation; properly segregated incompatible system-related functions, duties, and capacities for an individual associated with the NFE; effectively implemented or accurately reported the status of its remedial actions; or properly updated the NFE risk assessment. FDIC stated that it has initiated and completed some actions to mitigate the remaining five prior year weaknesses; however, we have not verified that these actions have been completed. Leaving these weaknesses unaddressed could expose the corporation's financial data to an increased risk of unauthorized access and manipulation. Appendix II describes the previously reported weaknesses in information security controls that were unresolved at the time of our prior review and the status of the corporation's corrective actions. Although FDIC has made significant progress improving its information system controls, old and new weaknesses could limit the corporation's ability to effectively protect the confidentiality, integrity, and availability of its financial systems and information. In addition to the five previously reported weaknesses that remain unresolved, newly identified weaknesses in access controls and configuration management controls introduce risk to two key financial systems. A key reason for these weaknesses is that FDIC did not always fully implement key information security program activities. As a result, increased risk exists of unauthorized disclosure or modification of financial information. A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss.
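In practice, such controls are implemented in software as well as in policy. The following minimal sketch, in Python, illustrates two of the mechanisms at issue in the findings that follow: individually assigned logon IDs that are never shared, and password storage using a salted, iterated hash rather than a recoverable or weakly protected form. It is an illustration of the general technique, not FDIC's actual implementation, and all names in it are invented.

    import hashlib
    import hmac
    import os

    # Hypothetical account store: one record per person, so that system
    # activity can be traced to an individual (no shared logon IDs).
    _accounts: dict[str, tuple[bytes, bytes]] = {}

    def create_account(logon_id: str, password: str) -> None:
        if logon_id in _accounts:
            # Refuse to reuse an ID; individual accountability depends on
            # each user holding a distinct logon ID and password.
            raise ValueError(f"logon ID {logon_id!r} already exists")
        salt = os.urandom(16)
        # PBKDF2-HMAC-SHA-256 is one widely used, standards-based way to
        # avoid storing passwords in recoverable form.
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        _accounts[logon_id] = (salt, digest)

    def verify(logon_id: str, password: str) -> bool:
        salt, expected = _accounts[logon_id]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        # Constant-time comparison avoids leaking matches through timing.
        return hmac.compare_digest(candidate, expected)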
FDIC developed policies and procedures on access control which, among other things, stated that login ID and password combinations should not be shared, access to application source code should be restricted unless users have a legitimate business need for access, and passwords should be adequately encrypted. However, FDIC did not always implement certain access controls, as the following examples show:

Multiple FDIC users in a production control unit in one division and multiple users in another division share the same NFE logon ID and password. As a result, increased risk exists that individual accountability for authorized, as well as unauthorized, system activity could be lost.

All users of the AIMS II application have full access to the application production code although their job responsibilities do not require such access. As a result, increased risk exists that individuals could circumvent security controls and deliberately or inadvertently read, modify, or delete critical source code.

One database connection could be compromised because the password is not adequately encrypted with a Federal Information Processing Standards 140-2 compliant algorithm. As a result, increased risk exists that the database could be compromised by unauthorized individuals who could then potentially change, add, or delete information.

Our Federal Information System Controls Audit Manual states that configuration management involves the identification and management of security features for all hardware and software components of an information system at a given point and systematically controls changes to that configuration during the system's life cycle. An effective configuration management process consists of four primary areas, each of which should be described in a configuration management plan and implemented according to the plan. The four are as follows:

Configuration identification: procedures for identifying, documenting, and assigning unique identifiers (for example, serial number and name) to requirements, design documents, and the system's hardware and software component parts, generally referred to as configuration items.

Configuration control: procedures for evaluating and deciding whether to approve changes to a system's baseline configuration; decision makers such as a Configuration Control Board evaluate proposed changes on the basis of costs, benefits, and risks, and decide whether to permit a change.

Configuration status accounting: procedures for documenting the status of configuration items as a system evolves.

Configuration auditing: procedures for determining traceability between the actual system and the documentation describing it (such as requirements documentation), thereby ensuring that the documentation used to support decision making is complete and correct. Configuration audits are performed when a significant system change is introduced and help to ensure that only authorized changes are being made and that systems are operating securely and as intended.
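In software terms, configuration identification and baselining reduce to a simple invariant: every item carries exactly one stable identifier, and a baseline is a controlled set of such items. A minimal sketch of that idea follows; it is our illustration, not FDIC's tooling, and the field names are invented.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConfigItem:
        """A configuration item with a single, stable identifier."""
        item_id: str       # for example "SR36"; never reassigned or reused
        kind: str          # "requirement", "design document", "source file", ...
        description: str

    class Baseline:
        """A formally approved set of configuration items; changes to it
        should flow only through change control procedures."""
        def __init__(self) -> None:
            self._items: dict[str, ConfigItem] = {}

        def add(self, item: ConfigItem) -> None:
            if item.item_id in self._items:
                raise ValueError(f"duplicate identifier {item.item_id!r}")
            self._items[item.item_id] = item

FDIC has made progress in implementing each of the four configuration management areas. Specifically, for configuration identification, FDIC has documented procedures for identifying and assigning unique identifiers and naming configuration items.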
For configuration control, it has documented procedures for requesting changes to configuration items, established configuration management plans that document employee roles and responsibilities, developed a Change Control Board that reviews changes to configuration items, and implemented configuration management tools. In addition, for configuration status accounting, FDIC has developed configuration management status accounting reports. Further, for configuration auditing, it has conducted testing and evaluation of releases. However, FDIC has not exercised adequate control over the configuration management of the NFE and AIMS II information system components. Specifically, it did not adequately (1) maintain a full and complete baseline for system requirements; (2) assign unique identifiers to configuration items; (3) authorize, document, and report all configuration changes; and (4) perform configuration audits. As a result, increased risk exists that functional requirements for these system components were not adequately implemented, managed, or maintained. In addition, increased risk exists that inconsistencies among requirements were not identified and that documents were not associated with the correct releases. An entity should maintain current configuration information in a formal configuration baseline that contains the configuration information formally designated at a specific time during a product's or product component's life. The Software Engineering Institute's Capability Maturity Model® Integration (CMMI) defines a baseline as a set of specifications or work products that has been formally reviewed and agreed on, which thereafter serves as the basis for further development or delivery, and that can be changed only through change control procedures. The NFE configuration management plan states that a baseline is a set of configuration items and their corresponding changes. The plan also states that changes to the requirements baseline should be controlled as part of configuration management throughout the life of the product. FDIC did not maintain a full and complete requirements baseline for NFE and AIMS II. For example, it could not provide a complete history of all approved requirements and changes to those requirements for NFE. Furthermore, although FDIC officials have stated that RequisitePro is the system of record for requirements, not all requirements for NFE or AIMS II were in RequisitePro. For example, requirements that were documented in the Software Requirement Specification (SRS) and architecture design documents were not included in RequisitePro. As a result, increased risk exists that requirements for these two systems were not adequately implemented, managed, or maintained and that the systems may not function as intended. The Software Engineering Institute's CMMI and the FDIC configuration management plan state that configuration items should have unique identifiers and naming conventions. Identifying items that fall under configuration management control is a key step in the configuration management process. A consistent naming convention for configuration items is important to ensure that requirements are consistently and uniquely identified, verifiable, and traceable. When requirements have unique identifiers and are managed well, traceability can be established from the source requirement to its lower level requirements and from the lower level requirements back to the source.
Such bidirectional traceability through unique identifiers helps determine that all source requirements have been completely addressed and that all lower level requirements can be traced to a valid source. FDIC did not consistently assign or use unique identifiers to identify or trace NFE and AIMS II configuration items such as requirements. Specifically, FDIC assigned multiple identifiers for the same requirement and did not always use the assigned identifiers to identify requirements in certain documents. For example, as table 1 illustrates:

NFE used "SR numbers" to identify requirements in the implementation report, test plan, test summary, and RequisitePro traceability matrix report but not in the SRS and the design document.

The NFE requirement numbers on the implementation report and the RequisitePro traceability matrix report differed from those on the test plan and test summary for the same requirement. For example, the configuration item identifier for change request 4739 was "SR36" on the implementation report and the RequisitePro traceability matrix but "SR7" on the test plan and test summary.

FDIC also did not consistently assign or use unique identifiers to identify or trace AIMS II requirements. For example, as table 2 illustrates:

AIMS II used "paragraph numbers" to identify requirements in the SRS, test plan, and RequisitePro traceability matrix report but not in the architecture design document or, in some instances, in the test summary.

The SRS paragraph number for one particular requirement is 3.1.1.6; however, the RequisitePro traceability matrix report points to the wrong paragraph number, 3.1.1.2, and introduces another identifier, "REQS2."

As a result of this lack of consistency in assigning and using unique identifiers for requirements, FDIC had many problems in tracing requirements. For example, our review of the AIMS II release 10.0 SRS, Software Architecture Document, test summary, and RequisitePro reports showed several misalignments in the 96 requirement numbers described in the RequisitePro traceability matrix. The following are examples:

Requirements 3.1.2.7 to 3.1.2.26 are documented in the test summary document but do not appear in the RequisitePro report.

Requirements 3.1.4.15 through 3.1.4.26 were missing from the SRS and test summary, though they were documented in the RequisitePro report.

A requirement is traced to SRS paragraph 3.1.1.19, although no SRS paragraph 3.1.1.19 exists.

Table 3 illustrates an example of a misaligned AIMS II requirement (3.1.1.8). In this example, "high priority" requirement REQS 8 on the RequisitePro traceability matrix is linked to a requirement in the SRS described as paragraph 3.1.1.8. The two carry the same number, but they are not the same requirement. Consequently, traceability cannot be adequately established from the source requirement to its lower level requirements and from the lower level requirements back to the source to ensure that all source requirements have been completely addressed.
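Once identifiers are unique and used consistently, checks of this kind can be run mechanically. The following minimal sketch, with identifiers we invented rather than FDIC's actual data, flags both directions of a broken trace.

    # Requirement identifiers used by each document (invented for
    # illustration; in practice these would be extracted from the SRS,
    # the test summary, and the traceability matrix).
    srs_ids = {"SR1", "SR7", "SR36"}   # source requirements in the SRS
    test_ids = {"SR1", "SR7", "SR99"}  # requirements cited by the tests

    # Forward trace: every source requirement should be covered by a test.
    untested = sorted(srs_ids - test_ids)   # -> ['SR36']

    # Backward trace: every tested requirement should exist in the SRS.
    orphaned = sorted(test_ids - srs_ids)   # -> ['SR99']

    for rid in untested:
        print(f"requirement {rid} has no test coverage")
    for rid in orphaned:
        print(f"tests cite {rid}, which is not in the SRS")

The Software Engineering Institute's CMMI and the FDIC configuration management plan state that an entity should properly control all configuration changes.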
This covers a wide range of activities, including the following: a change control board should authorize and approve all configuration changes; change requests should be adequately documented; and status accounting reports should show baselines, allow users to trace requirements throughout the release, and be accurate. However, FDIC did not adequately authorize, document, and report all configuration changes. The FDIC Change Control Board did not authorize and approve all configuration changes for NFE and AIMS II; for example, PeopleSoft access control changes were not made through the Change Control Board. Change requests were not adequately documented; for example, the implementation date and version number were left out of all change requests for NFE and AIMS II. Status accounting reports did not show baselines, did not trace requirements throughout the release, and were not accurate. For example, FDIC could not generate a complete requirements baseline report for NFE or AIMS II, nor could it produce configuration management reports of all PeopleSoft configuration items. Furthermore, traceability reports were manually generated and had many errors. As a result, increased risk exists that unauthorized changes could be made or introduced to FDIC's systems. The Software Engineering Institute's CMMI and the FDIC configuration management plans state that configuration audits should be conducted to verify that the teams are following the configuration management process and to ensure that all approved items are built. These audits consist of a physical configuration audit and a functional configuration audit. The physical audit validates and verifies that all items are under configuration management control, that configuration items are identified, and that team members are following the configuration management process. The functional configuration audit traces configuration items from requirements and design to the final delivered release baseline. FDIC performed limited configuration auditing of NFE and AIMS II. For example, both the NFE and AIMS II teams had developed auditing checklists and made sure independent testing was conducted. However, FDIC did not adequately ensure that configuration audits verified and validated the configuration management process and confirmed that all approved items were built. For example, FDIC did not verify and validate in a physical audit that all items were under configuration management control, since changes were being made without the Configuration Control Board's approval. In addition, teams were not assigning unique identifiers as required by the configuration management plans. Furthermore, FDIC did not verify and validate in a functional audit that adequate traceability existed, since requirements could not be traced backward and forward from design to the final delivered release baseline. As a result, the risk exists that the configuration audits did not adequately verify and validate that functional requirements were adequately implemented, managed, and maintained.
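Parts of a physical configuration audit can be automated once change records are kept in structured form. A minimal sketch follows, with invented change records rather than FDIC's actual data.

    # Hypothetical change records; in practice, approval status would come
    # from Change Control Board minutes and implementation status from the
    # release records.
    changes = [
        {"id": "CR-1001", "ccb_approved": True,  "implemented": True},
        {"id": "CR-1002", "ccb_approved": False, "implemented": True},
        {"id": "CR-1003", "ccb_approved": True,  "implemented": False},
    ]

    # Audit rule: every implemented change must have board approval.
    unauthorized = [c["id"] for c in changes
                    if c["implemented"] and not c["ccb_approved"]]
    print("implemented without CCB approval:", unauthorized)  # ['CR-1002']

FDIC has made important progress in implementing the corporation's information security program; however, a key reason for these information security weaknesses is that FDIC did not always fully implement key information security program activities.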
FDIC requires its components to implement information security program activities in accordance with FISMA requirements, Office of Management and Budget (OMB) policies, and applicable National Institute of Standards and Technology (NIST) guidance. Among other things, FISMA requires agencies to develop, document, and implement

periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;

plans for providing adequate information security for networks, facilities, and systems;

security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security;

periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and including testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;

a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in information security policies, procedures, and practices of the agency;

procedures for detecting, reporting, and responding to security incidents; and

plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

FDIC has taken several actions to implement elements of its information security program. For example, FDIC has

included nonmajor applications in major systems security plans and developed a new security plan template;

implemented a risk assessment process that identified possible threats and vulnerabilities to its systems and information, as well as the controls needed to mitigate potential vulnerabilities;

implemented a test and evaluation process to assess the effectiveness of information security policies, procedures, and practices;

ensured that vulnerabilities identified during its tests and evaluations are addressed in its remedial action plans;

established a system for documenting and tracking corrective actions;

recognized that NFE users are not physically or logically separated in terms of what they are allowed to access within NFE;

implemented an incident handling program, including establishing a team and associated procedures for detecting, responding to, and reporting computer security incidents;

developed an incident response policy for reviewing events related to data loss, disclosure, inappropriate access, and loss of equipment in the Division of Finance to determine whether the events are computer security incidents; and

developed the corporation's business continuity of operations, updated the contingency plans and business impact analyses, and assessed the effectiveness of the plans through testing at a disaster recovery site.

However, FDIC did not always fully implement key information security program activities for NFE and AIMS II. For example, it did not adequately conduct configuration control testing or complete remedial action plans in a timely manner to include key information. Until FDIC fully performs key information security program activities, its risk is increased because it may not be able to maintain adequate control over its financial systems and information.
A key element of an information security program is testing and evaluating system configuration controls to ensure that they are appropriate, effective, and comply with policies. According to NIST, the organization should (1) develop, document, and maintain a current baseline configuration of the information system, updating it as the system changes, and (2) assess the degree of consistency between system documentation and its implementation in security tests, including tests of configuration management controls. FDIC did not adequately test NFE configuration management controls. The depth of FDIC's system testing and evaluation for configuration management controls was insufficient, since we identified vulnerabilities in the configuration management process during our testing that FDIC did not. Specifically, the NFE system test and evaluation report stated that FDIC developed, documented, and maintained a current baseline configuration; however, as previously noted, we found that FDIC did not maintain a full and complete requirements baseline for NFE. In addition, the NFE system test and evaluation stated that FDIC authorizes and controls changes to the information system; however, as previously noted, we found that some configuration changes were not authorized and controlled by the Configuration Control Board. Furthermore, the NFE system test and evaluation stated that configuration items were uniquely identified and stored in configuration management libraries, yet we found that FDIC had problems assigning unique identifiers to configuration items for NFE. Without adequate tests and evaluations of configuration management controls, FDIC has limited assurance that configuration controls are being effectively tested and reported. A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires that agencies' remedial action plans (also known as plans of action and milestones) include the resources necessary to correct an identified weakness. According to FDIC policy, the agency should document weaknesses found during security assessments. The policy further requires that FDIC track the status of resolution of all weaknesses and verify that each weakness is corrected. The NFE remedial action plan was not completed in a timely manner and did not include necessary and key information. FDIC performed a system test and evaluation of NFE in November 2007 and developed a plan of action and milestones to correct any identified weaknesses. However, the plan of action and milestones report did not contain necessary and key information, such as who will be responsible for the corrective action, when the action will be closed, and the status of the action. For example, the plan of action and milestones document included problems with the PeopleSoft security roles and functions; however, it did not state how FDIC would address these issues. FDIC officials stated that they were in the process of completing the plan of action and milestones with the required information but had not established a milestone date for doing so.
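Until the plan contains necessary and key information, FDIC's assurance is reduced that the proper resources will be applied to known vulnerabilities or that those vulnerabilities will be properly mitigated. Completeness checks of this kind are straightforward to mechanize once a plan of action and milestones is kept in structured form; the following minimal sketch uses field names we invented, not OMB's actual template.

    # Fields a plan of action and milestones entry should carry
    # (names are illustrative).
    REQUIRED_FIELDS = ("weakness", "responsible_contact",
                       "scheduled_completion", "status")

    def incomplete_entries(poam):
        """Return (index, missing fields) for each deficient entry."""
        problems = []
        for i, entry in enumerate(poam):
            missing = [f for f in REQUIRED_FIELDS if not entry.get(f)]
            if missing:
                problems.append((i, missing))
        return problems

    # Example: an entry that names a weakness but no owner, date, or status.
    plan = [{"weakness": "security roles and functions overlap"}]
    print(incomplete_entries(plan))
    # -> [(0, ['responsible_contact', 'scheduled_completion', 'status'])]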
FDIC has made significant progress in correcting previously reported weaknesses and has taken steps to improve information security. Although five weaknesses from prior reports remain unresolved and new control weaknesses related to access control and configuration management were identified, neither the unresolved nor the newly identified weaknesses posed a significant risk of material misstatement in the corporation's financial statements for calendar year 2007. However, these weaknesses increase preventable risk to the corporation's financial and sensitive systems and information and warrant management's immediate attention. A key reason for these weaknesses is that FDIC did not always fully implement key information security program activities. Continued management commitment to mitigating known information security weaknesses in access controls and configuration management and to fully implementing its information security program will be essential to ensure that the corporation's financial information is adequately protected from unauthorized disclosure, modification, or destruction and that its management decisions are based on reliable and accurate information. To sustain progress in its program, we recommend that the Chief Operating Officer direct the CIO to take the following 10 actions.

Improve access controls by ensuring that

NFE users do not share login IDs and passwords;

AIMS II users do not have full access to application source code unless they have a legitimate business need; and

the database connection password is adequately encrypted with a FIPS 140-2 compliant algorithm.

Improve NFE and AIMS II configuration management by ensuring that

full and complete requirements baselines are developed and implemented;

configuration items have unique identifiers;

configuration changes are properly authorized, documented, and reported;

physical configuration audits verify and validate that all items are under configuration management control, that all changes are approved by the configuration control board, and that teams assign unique identifiers to configuration items; and

functional configuration audits verify and validate that requirements have bidirectional traceability and can be traced across documents.

Improve the security management of NFE and AIMS II by ensuring that

configuration management controls are adequately tested as part of the system test and evaluation process; and

a detailed plan of action and milestones is developed in a timely manner for NFE, including who will be responsible for each corrective action, when the action will be closed, and the status of the action.

We received written comments on a draft of this report from FDIC's Deputy to the Chairman and Chief Financial Officer (which are reprinted in app. III). The Deputy stated that FDIC concurred with one recommendation and partially concurred with the remaining nine. He added that, in general, FDIC found the issues to be more limited than presented in the draft report, yet FDIC has taken or will take action to improve configuration management and information security.
We believe that the issues presented in this report are accurate and that they can increase the risk of unauthorized disclosure, modification, or destruction of the corporation's financial information and the risk that management decisions may be based on unreliable or inaccurate information. Regarding the nine recommendations with which FDIC partially concurred, the Deputy stated that the corporation has developed or implemented plans to adequately address the underlying risks that prompted these nine recommendations and, in some instances, pursued alternative corrective actions. If the corporation effectively implements the alternative corrective actions to reduce risk, it will satisfy the intent of the recommendations. In addition, the Deputy provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; members of the FDIC Audit Committee; officials in FDIC's divisions of information resources management, administration, and finance; the FDIC inspector general; and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at wilshuseng@gao.gov and barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objectives of our review were to assess (1) the progress the Federal Deposit Insurance Corporation (FDIC) has made in mitigating previously reported information security weaknesses and (2) the effectiveness of FDIC's controls in protecting the confidentiality, integrity, and availability of its financial systems and information. An integral part of our objectives was to support the opinion on internal control in GAO's 2007 financial statement audit by assessing the controls over systems that support financial management and the generation of the FDIC funds' financial statements. To determine the status of FDIC's actions to correct or mitigate previously reported information security weaknesses, we identified and reviewed its information security policies, procedures, and guidance. We reviewed prior GAO reports to identify previously reported weaknesses and examined FDIC's corrective action plans to determine which weaknesses FDIC had reported were corrected. For those instances where FDIC reported it had completed corrective actions, we assessed the effectiveness of those actions. To determine whether controls over key financial systems were effective, we tested the effectiveness of information security and information technology-based internal controls. We concentrated our evaluation primarily on the controls for financial applications, enterprise database applications, and network infrastructure associated with the New Financial Environment (NFE) release 1.43 and the Assessment Information Management System II (AIMS II) release 10.0 applications.
Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. Using NIST standards and guidance, and FDIC's policies, procedures, practices, and standards, we evaluated controls by

observing methods for providing secure data transmissions across the network to determine whether sensitive data were being encrypted;

testing and observing physical access controls to determine whether computer facilities and resources were being protected from espionage, sabotage, damage, and theft;

evaluating the control configurations of selected servers and databases;

inspecting key servers and workstations to determine whether critical patches had been installed or were up to date;

examining access responsibilities to determine whether incompatible functions were segregated among different individuals; and

observing end-user activity pertaining to the process of preparing FDIC financial statements.

Using the requirements of the Federal Information Security Management Act (FISMA), which establishes key elements for an effective agencywide information security program, we evaluated FDIC's implementation of its security program by

reviewing FDIC's risk assessment process and risk assessments for two key FDIC systems that support the preparation of financial statements to determine whether risks and threats were documented consistent with federal guidance;

analyzing FDIC's policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems;

analyzing security plans to determine whether management, operational, and technical controls were in place or planned and whether security plans were updated;

examining training records for personnel with significant security responsibilities to determine whether they received training commensurate with those responsibilities;

analyzing configuration management plans and procedures to determine whether configurations were being managed appropriately;

analyzing security testing and evaluation results for two key FDIC systems to determine whether management, operational, and technical controls were tested at least annually and based on risk;

examining remedial action plans to determine whether they addressed vulnerabilities identified in FDIC's security testing and evaluations; and

examining contingency plans for two key FDIC systems to determine whether those plans had been tested or updated.

We also discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. We conducted this audit work from October 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix describes the status of the information security weaknesses we reported last year. It also includes the status of weaknesses from previous reports for which corrective actions were not fully implemented at the time of our last review.
1. FDIC did not effectively limit network access to sensitive personally identifiable and business proprietary information.

Audit and monitoring of security-related events

2. FDIC did not effectively generate NFE audit reports or review them.

3. FDIC did not use secure e-mail methods to protect the integrity of certain accounting data transferred over an internal communication network.

4. FDIC did not adequately control physical access to the Virginia Square computer processing facility.

5. FDIC did not apply physical security controls in some instances. For example, an unauthorized visitor was able to enter a key FDIC facility without providing proof of identity, signing a visitor log, obtaining a visitor's badge, or being escorted.

6. FDIC did not apply physical security controls in some instances. For example, a workstation that had access to a payroll system was located in an unsecured office.

Configuration management (formerly application change control)

7. Procedures were not consistently followed for authorizing, documenting, and reviewing all application software changes.

8. FDIC did not consistently implement configuration management controls for NFE. Specifically, the corporation did not develop and maintain a complete listing of all configuration items and a baseline configuration for NFE, including application software, data files, software development tools, hardware, and documentation.

9. FDIC did not ensure that all significant system changes, such as parameter changes, went through a change control process.

10. FDIC did not apply comprehensive patches to system software in a timely manner.

11. FDIC did not review status accounting reports or perform complete functional and physical configuration audits.

12. FDIC did not update or control documents to reflect the current state of the environment and to ensure consistency with related documents.

13. FDIC did not properly segregate incompatible system-related functions, duties, and capacities for an individual associated with NFE.

Security management (formerly information security program)

14. FDIC has documented various policies for establishing effective information security controls; however, the corporation has not consistently implemented them.

15. FDIC did not integrate the security plans or requirements for certain nonmajor applications into the security plan for the general support system. Two of FDIC's nonmajor applications, the corporation's human resources and time and attendance systems, are not included in FDIC general support systems security plans.

16. FDIC did not effectively implement or accurately report the status of its remedial actions.

17. FDIC did not update its business impact analysis to reflect the significant changes resulting from the implementation of NFE.

18. The risk assessment for FDIC's NFE was not properly updated.

19. The corporation did not update the system security plan for NFE.

20. The corporation did not always review events occurring in NFE to determine whether the events were computer security incidents.

21. FDIC's NFE contingency plan was not updated to reflect the new disaster recovery site. In addition, the plan identified servers that were not in use.

In addition to the individuals named above, William F. Wadsworth (Assistant Director), Angela M. Bell, Neil J. Doherty, Patrick R. Dugan, Mickie E. Gray, David B. Hayes, Tammi L. Nguyen, Eugene E. Stevens IV, Amos A. Tevelow, and Jayne L. Wilson made key contributions to this report.
The Federal Deposit Insurance Corporation (FDIC) has a demanding responsibility enforcing banking laws, regulating financial institutions, and protecting depositors. Effective information security controls are essential to ensure that FDIC systems and information are adequately protected from inadvertent misuse, fraudulent use, or improper disclosure. As part of its audit of FDIC's 2007 financial statements, GAO assessed (1) the progress FDIC has made in mitigating previously reported information security weaknesses and (2) the effectiveness of FDIC's controls in protecting the confidentiality, integrity, and availability of its financial systems and information. To do this, GAO examined security policies, procedures, reports, and other documents; observed controls over key financial applications; and interviewed key FDIC personnel. FDIC has made significant progress in mitigating previously reported information security weaknesses. Specifically, it has corrected or mitigated 16 of the 21 weaknesses that GAO had previously reported as unresolved at the completion of the 2006 audit. For example, FDIC has improved physical security controls over access to its Virginia Square computer processing facility, instructed personnel to use more secure e-mail methods to protect the integrity of certain accounting data transferred over an internal communication network, and updated the security plan and contingency plan of a key financial system. In addition, FDIC stated that it has initiated and completed some actions to mitigate the remaining five prior weaknesses; however, GAO has not verified that these actions have been completed. Although FDIC has made significant progress improving its information system controls, old and new weaknesses could limit the corporation's ability to effectively protect the confidentiality, integrity, and availability of its financial systems and information. In addition to the five previously reported weaknesses that remain unresolved, newly identified weaknesses in access controls and configuration management controls introduce risk to two key financial systems. For example, FDIC did not always implement adequate access controls: multiple FDIC users shared the same login ID and password, users had unrestricted access to application source code, and a database connection password was not adequately encrypted. In addition, FDIC did not adequately (1) maintain a full and complete baseline for system requirements; (2) assign unique identifiers to configuration items; (3) authorize, document, and report all configuration changes; and (4) perform configuration audits. Although these weaknesses do not pose significant risk of misstatement of the corporation's financial statements, they do increase preventable risk to the corporation's financial systems and information. A key reason for these weaknesses is that FDIC did not always fully implement key information security program activities. For example, it did not adequately conduct configuration control testing or complete the remedial action plan in a timely manner with the necessary and key information. Until FDIC fully performs key information security program activities, its ability to maintain adequate control over its financial systems and information will be limited.
Collecting information is one way that federal agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the correct amount of taxes owed. The U.S. Census Bureau collects information used to apportion congressional representation and for many other purposes. When new circumstances or needs arise, agencies may need to collect new information. We recognize, therefore, that a large portion of federal paperwork is necessary and serves a useful purpose. Nonetheless, besides ensuring that information collections have public benefit and utility, federal agencies are required by the PRA to minimize the paperwork burden that they impose. Among the provisions of the act aimed at this purpose are requirements for the review of information collections by OMB and by agency CIOs. Under PRA, federal agencies may not conduct or sponsor the collection of information unless approved by OMB. OMB is required to determine that the agency collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. Consistent with the act’s requirements, OMB has established a process to review all proposals by executive branch agencies (including independent regulatory agencies) to collect information from 10 or more persons, whether the collections are voluntary or mandatory. In addition, the act as amended in 1995 requires every agency to establish a process under the official responsible for the act’s implementation (now the agency’s CIO) to review program offices’ proposed collections. This official is to be sufficiently independent of program responsibility to evaluate fairly whether information collections should be approved. Under the law, the CIO is to review each collection of information before submission to OMB, including reviewing the program office’s evaluation of the need for the collection and its plan for the efficient and effective management and use of the information to be collected, including necessary resources. As part of that review, the agency CIO must ensure that each information collection instrument (form, survey, or questionnaire) complies with the act. The CIO is also to certify that the collection meets 10 standards (see table 1) and to provide support for these certifications. The paperwork clearance process currently takes place in two stages. The first stage is CIO review. During this review, the agency is to publish a notice of the collection in the Federal Register. The public must be given a 60-day period in which to submit comments, and the agency is to otherwise consult with interested or affected parties about the proposed collection. At the conclusion of the agency review, the CIO submits the proposal to OMB for review. The agency submissions to OMB typically include a copy of the data collection instrument (e.g., a form or survey) and an OMB submission form providing information (with supporting documentation) about the proposed information collection, including why the collection is necessary, whether it is new or an extension of a currently approved collection, whether it is voluntary or mandatory, and the estimated burden hours. Included in the submission is the certification by the CIO or the CIO’s designee that the collection satisfies the 10 standards. The OMB review is the second stage in the clearance process. This review may involve consultation between OMB and agency staff. 
During the review, a second notice is published in the Federal Register, this time with a 30-day period for soliciting public comment. At the end of this period, OMB makes its decision and informs the agency. OMB maintains on its Web site a list of all approved collections and their currently valid control numbers, including the form numbers approved under each collection. The 1995 PRA amendments also require OMB to set specific goals for reducing burden from the level it had reached in 1995: at least a 10 percent reduction in the governmentwide burden-hour estimate for each of fiscal years 1996 and 1997, a 5 percent governmentwide burden reduction goal in each of the next 4 fiscal years, and annual agency goals that reduce burden to the "maximum practicable opportunity." At the end of fiscal year 1995, federal agencies estimated that their information collections imposed about 7 billion burden hours on the public. Thus, for these reduction goals to be met, the burden-hour estimate would have had to decrease by about 35 percent (the goals compound to 0.9 x 0.9 x 0.95 x 0.95 x 0.95 x 0.95, or about 66 percent of the 1995 level), to about 4.6 billion hours, by September 30, 2001. In fact, on that date, the federal paperwork estimate had increased by about 9 percent, to 7.6 billion burden hours. As of March 2006, OMB's estimate of governmentwide burden is about 10.5 billion hours, about 2.5 billion hours more than the estimate of 7.971 billion hours at the end of fiscal year 2004. Over the years, we have reported on the implementation of PRA many times. In a succession of reports and testimonies, we noted that federal paperwork burden estimates generally continued to increase, rather than decrease as envisioned by the burden reduction goals in PRA. Further, we reported that some burden reduction claims were overstated. For example, although some reported paperwork reductions reflected substantive program changes, others were revisions to agencies' previous burden estimates and, therefore, would have no effect on the paperwork burden felt by the public. In our previous work, we also repeatedly pointed out ways that OMB and agencies could do more to ensure compliance with PRA. In particular, we have often recommended that OMB and agencies take actions to improve the paperwork clearance process. Governmentwide, agency CIOs generally reviewed information collections before they were submitted to OMB and certified that the 10 standards in the act were met. However, in our 12 case studies, CIOs provided these certifications despite often missing or partial support from the program offices sponsoring the collections. Further, although the law requires CIOs to provide support for certifications, agency files contained little evidence that CIO reviewers had made efforts to get program offices to improve the support that they offered. Numerous factors have contributed to these conditions, including a lack of management support and weaknesses in OMB guidance. Without appropriate support and public consultation, agencies have reduced assurance that collections satisfy the standards in the act. Among the PRA provisions intended to help achieve the goals of minimizing burden while maximizing utility are the requirements for CIO review and certification of information collections. The 1995 amendments required agencies to establish centralized processes for reviewing proposed information collections within the CIO's office.
Over the years, we have reported on the implementation of PRA many times. In a succession of reports and testimonies, we noted that federal paperwork burden estimates generally continued to increase, rather than decrease as envisioned by the burden reduction goals in PRA. Further, we reported that some burden reduction claims were overstated. For example, although some reported paperwork reductions reflected substantive program changes, others were revisions to agencies’ previous burden estimates and, therefore, would have no effect on the paperwork burden felt by the public. In our previous work, we also repeatedly pointed out ways that OMB and agencies could do more to ensure compliance with PRA. In particular, we have often recommended that OMB and agencies take actions to improve the paperwork clearance process. Governmentwide, agency CIOs generally reviewed information collections before they were submitted to OMB and certified that the 10 standards in the act were met. However, in our 12 case studies, CIOs provided these certifications despite often missing or partial support from the program offices sponsoring the collections. Further, although the law requires CIOs to provide support for certifications, agency files contained little evidence that CIO reviewers had made efforts to get program offices to improve the support that they offered. Numerous factors have contributed to these conditions, including a lack of management support and weaknesses in OMB guidance. Without appropriate support and public consultation, agencies have reduced assurance that collections satisfy the standards in the act. Among the PRA provisions intended to help achieve the goals of minimizing burden while maximizing utility are the requirements for CIO review and certification of information collections. The 1995 amendments required agencies to establish centralized processes for reviewing proposed information collections within the CIO’s office. Among other things, the CIO’s office is to certify, for each collection, that the 10 standards in the act have been met, and the CIO is to provide a record supporting these certifications. The four agencies in our review all had written directives that implemented the review requirements in the act, including the requirement for CIOs to certify that the 10 standards in the act were met. The estimated certification rate ranged from 100 percent at IRS and HUD to 92 percent at VA. Governmentwide, agencies certified that the act’s 10 standards had been met on an estimated 98 percent of the 8,211 collections. However, in the 12 case studies that we reviewed, this CIO certification occurred despite a lack of rigorous support that all standards were met. Specifically, the support for certification was missing or partial on 65 percent (66 of 101) of the certifications. Table 4 shows the result of our analysis of the case studies. For example, under the act, CIOs are required to certify that each information collection is not unnecessarily duplicative. According to OMB instructions, agencies are to (1) describe efforts to identify duplication and (2) show specifically why any similar information already available cannot be used or modified for the purpose described. One case file offered the following support: “Program reviews were conducted to identify potential areas of duplication; however, none were found to exist. There is no known Department or Agency which maintains the necessary information, nor is it available from other sources within our Department.” A second file stated, in part, that the program “. . . is a new, nationwide service that does not duplicate any single existing service that attempts to match employers with providers who refer job candidates with disabilities. While similar job-referral services exist at the state level, and some nation-wide disability organizations offer similar services to people with certain disabilities, we are not aware of any existing survey that would duplicate the scope or content of the proposed data collection. Furthermore, because this information collection involves only providers and employers interested in participating in the EARN service, and because this is a new service, a duplicate data set does not exist.” While this example shows that the agency attempted to identify duplicative sources, it does not discuss why information from state and other disability organizations could not be aggregated and used, at least in part, to satisfy the needs of this collection. A third file offered only this assertion: “We have attempted to eliminate duplication within the agency wherever possible.” This assertion provides no information on what efforts were made to identify duplication or perspective on why similar information, if any, could not be used. Further, the files contained no evidence that the CIO reviewers challenged the adequacy of this support or provided support of their own to justify their certification. A second example is provided by the standard requiring each information collection to reduce burden on the public, including small entities, to the extent practicable and appropriate. OMB guidance emphasizes that agencies are to demonstrate that they have taken every reasonable step to ensure that the collection of information is the least burdensome necessary for the proper performance of agency functions.
In addition, OMB instructions and guidance direct agencies to provide specific information and justifications: (1) estimates of the hour and cost burden of the collections and (2) justifications for any collection that requires respondents to report more often than quarterly, respond in fewer than 30 days, or provide more than an original and two copies of documentation. With regard to small entities, OMB guidance states that the standard emphasizes such entities because these often have limited resources to comply with information collections. The act cites various techniques for reducing burden on these small entities, and the guidance includes techniques that might be used to simplify requirements for small entities, such as asking fewer questions, taking smaller samples than for larger entities, and requiring small entities to provide information less frequently. Our review of the case examples found that for the first part of the certification, which focuses on reducing burden on the public, the files generally contained the specific information and justifications called for in the guidance. However, none of the case examples contained support that addressed how the agency ensured that the collection was the least burdensome necessary. According to agency CIO officials, the primary cause for this absence of support is that OMB instructions and guidance do not explicitly direct agencies to provide this information as part of the approval package. For the part of the certification that focuses on small businesses, our governmentwide sample included examples of various agency activities that are consistent with this standard. For instance, Labor officials exempted 6 million small businesses from filing an annual report; telephoned small businesses and other small entities to assist them in completing a questionnaire; reduced the number of small businesses surveyed; and scheduled fewer compliance evaluations on small contractors. For four of our case studies, however, complete information that would support certification of this part of the standard was not available. Seven of the 12 case studies involved collections that were reported to affect businesses or other for-profit entities, but for 4 of the 7, the files explained neither ● why small businesses were not affected nor ● whether, for the businesses that were affected, burden could be reduced. Referring to methods used to minimize burden on small business, the files included statements such as “not applicable.” Such statements do not tell the reviewer whether any effort was made to reduce burden on small entities. When we asked agencies about these four cases, they indicated that the collections did, in fact, affect small business. OMB’s instructions to agencies on this part of the certification require agencies to describe any methods used to reduce burden only if the collection of information has a “significant economic impact on a substantial number of small entities.” This does not appropriately reflect the act’s requirements concerning small business: the act requires that the CIO certify that the information collection reduces burden on small entities in general, to the extent practical and appropriate, and provides no thresholds for the level of economic impact or the number of small entities affected. OMB officials acknowledged that their instruction is an “artifact” from a previous form and more properly focuses on rulemaking rather than the information collection process.
The lack of support for these certifications appears to be influenced by a variety of factors. In some cases, as described above, OMB guidance and instructions are not comprehensive or entirely accurate. In the case of the duplication standard specifically, IRS officials said that the agency does not need to further justify that its collections are not duplicative because (1) tax data are not collected by other agencies, so there is no need for the agency to contact them about proposed collections, and (2) IRS has an effective internal process for coordinating proposed forms among the agency’s various organizations that may have similar information. Nonetheless, the law and instructions require support for these certifications, which was not provided. In addition, agency reviewers told us that management assigns a relatively low priority and few resources to reviewing information collections. Further, program offices have little knowledge of and appreciation for the requirements of the PRA. As a result of these conditions and a lack of detailed program knowledge, reviewers often have insufficient leverage with program offices to encourage them to improve their justifications. When support for the PRA certifications is missing or inadequate, OMB, the agency, and the public have reduced assurance that the standards in the act, such as those on avoiding duplication and minimizing burden, have been consistently met. IRS and EPA have supplemented the standard PRA review process with additional processes aimed at reducing burden while maximizing utility. These agencies’ missions require them both to deal extensively with information collections, and their management has made reduction of burden a priority. In January 2002, the IRS Commissioner established an Office of Taxpayer Burden Reduction, which includes both permanently assigned staff and staff temporarily detailed from program offices that are responsible for particular information collections. This office chooses a few forms each year that are judged to have the greatest potential for burden reduction (these forms have already been reviewed and approved through the CIO process). The office evaluates and prioritizes burden reduction initiatives by ● determining the number of taxpayers impacted; ● quantifying the total time and out-of-pocket savings for taxpayers; ● evaluating any adverse impact on IRS’s voluntary compliance; ● assessing the feasibility of the initiative, given IRS resource constraints; and ● tying the initiative into IRS objectives. Once the forms are chosen, the office performs highly detailed, in-depth analyses, including extensive outreach to the public affected, the users of the information within and outside the agency, and other stakeholders. This analysis includes an examination of the need for each data element requested. In addition, the office thoroughly reviews form design. The office’s Director heads a Taxpayer Burden Reduction Council, which serves as a forum for achieving taxpayer burden reduction throughout IRS. IRS reports that as many as 100 staff across IRS and other organizations can be involved in burden reduction initiatives; participants include other federal agencies, state agencies, tax practitioner groups, taxpayer advocacy panels, and groups representing the small business community.
The council directs its efforts in five major areas: ● simplifying forms and publications; ● streamlining internal policies, processes, and procedures; ● promoting consideration of burden reductions in rulings, regulations, and laws; ● assisting in the development of burden reduction measurement tools; and ● partnering with internal and external stakeholders to identify areas of potential burden reduction. IRS reports that this targeted, resource-intensive process has achieved significant reductions in burden: over 200 million burden hours since 2002. For example, it reports that about 95 million hours of taxpayer burden were reduced through increases in the income-reporting threshold on various IRS schedules. Another burden reduction initiative includes a review of the forms that 15 million taxpayers use to request an extension of the date for filing their tax returns. Similarly, EPA officials stated that they have established processes for reviewing information collections that supplement the standard PRA review process. These processes are highly detailed and evaluative, with a focus on reducing burden, avoiding duplication, and ensuring compliance with PRA. According to EPA officials, the impetus for establishing these processes was the high visibility of the agency’s information collections and the recognition, among other things, that the success of EPA’s enforcement mission depended on information collections being properly justified and approved: in the words of one official, information collections are the “life blood” of the agency. According to these officials, the CIO staff are not generally closely involved in burden reduction initiatives, because they do not have sufficient technical program expertise and cannot devote the extensive time required. Instead, these officials said that the CIO staff’s focus is on fostering high awareness within the agency of the requirements associated with information collections, educating and training the program office staff on the need to minimize burden and the impact on respondents, providing an agencywide perspective on information collections to help avoid duplication, managing the clearance process for agency information collections, and acting as liaison between program offices and OMB during the clearance process. To help program offices consider PRA requirements such as burden reduction and avoiding duplication as they are developing new information collections or working on reauthorizing existing collections, the CIO staff also developed a handbook to help program staff understand what they need to do to comply with PRA and gain OMB approval. In addition, program offices at EPA have taken on burden reduction initiatives that are highly detailed and lengthy (sometimes lasting years) and that involve extensive consultation with stakeholders (including entities that supply the information, citizens groups, information users and technical experts in the agency and elsewhere, and state and local governments). For example, EPA reports that it amended its regulations to reduce the paperwork burden imposed under the Resource Conservation and Recovery Act. One burden reduction method EPA used was to establish higher thresholds for small businesses to report information required under the act. EPA estimates that the initiative will reduce burden by 350,000 hours and save $22 million annually. Another EPA program office reports that it is proposing a significant reduction in burden for its Toxic Release Inventory program.
Both the EPA and IRS programs involve extensive outreach to stakeholders, including the public. This outreach is particularly significant in view of the relatively low levels of public consultation that occur under the standard review process. As we reported in May 2005, public consultation on information collections is often limited to publication of notices in the Federal Register. As a means of public consultation, however, these notices are not effective, as they elicit few responses. An estimated 7 percent of the 60-day notices of collections in the Federal Register received one or more comments. In our sample of all collections at the four agencies reviewed, the percentage of notices receiving at least one comment ranged from an estimated 15 percent at Labor to an estimated 6 percent at IRS. In contrast, according to EPA and IRS, their efforts at public consultation are key to their burden reduction efforts and an important factor in their success. Overall, EPA and IRS reported that their targeted processes produced significant reductions in burden by making a commitment to this goal and dedicating resources to it. In contrast, for the 12 information collections we examined, the CIO review process resulted in no reduction in burden. Further, the Department of Labor reported that its PRA reviews of 175 proposed collections over nearly 2 years did not reduce burden. Similarly, both IRS and EPA addressed information collections that had undergone CIO review and received OMB approval and nonetheless found significant opportunities to reduce burden. In our 2005 report, we concluded that the CIO review process was not working as Congress intended: It did not result in a rigorous examination of the burden imposed by information collections, and it did not lead to reductions in burden. In light of these findings, we recommended (among other things) that agencies strengthen the support provided for CIO certifications and that OMB update its guidance to clarify and emphasize this requirement. Since our report was issued, the four agencies have reported taking steps to strengthen their support for CIO certifications: ● According to the HUD CIO, the department established a senior-level PRA compliance officer in each major program office, and it has revised its certification process to require that before collections are submitted for review, they be approved at a higher management level within program offices. ● The Treasury CIO established an Information Management Sub-Council under the Treasury CIO Council and added resources to the review process. ● According to VA’s 2007 budget submission, the department obtained additional resources to help review and analyze its information collection requests. ● According to the Office of the CIO at the Department of Labor, the department intends to provide guidance to components regarding the need to provide strong support for clearance requests and has met with component staff to discuss these issues. OMB reported that its guidance to agencies will be updated through a planned automated system, which is expected to be operational by the end of this year. According to the acting head of OMB’s Office of Information and Regulatory Affairs, the new system will permit agencies to submit clearance requests electronically, and the instructions will provide clear guidance on the requirements for these submissions, including the support required.
This official stated that OMB has worked with agency representatives with direct knowledge of the PRA clearance process in order to ensure that the system and its instructions clearly reflect the requirements of the process. If this system is implemented as described and OMB withholds clearance from submissions that lack adequate support, it could lead agencies to strengthen the support provided for their certifications. In considering PRA reauthorization, the Congress has the opportunity to take into account ideas that were developed by the various experts at the PRA forum that we organized in 2005. These experts noted, as we have here, that the burden reduction goals in the act have not been met, and that in fact burden has been going up. They suggested first that the goal of reducing burden by 5 percent is not realistic, and also that such numerical goals do not appropriately recognize that some burden is necessary. The important point, in their view, is to reduce unnecessary burden while still ensuring maximum utility. Forum participants also questioned the level of attention that OMB devotes to the process of clearing collections on what they called a “retail” basis, focusing on individual collections rather than looking across numerous collections. In their view, some of this attention would be better devoted to broader oversight questions. In their discussion, participants mentioned that the clearance process informs OMB with respect to its other information resource management functions, but that this had not led to high-level integration and coordination. It was suggested that the volume of collections to be individually reviewed could impede such integration. Participants made a number of suggestions regarding ways to reduce the volume of collections that OMB reviews, with the goal of freeing OMB resources so that it could address more substantive, wide-ranging paperwork issues. Options that they suggested included limiting OMB review to significant and selected collections, rather than all collections. This would entail shifting more responsibility for review to the agencies, which they stated was one of the avowed purposes of the 1995 amendments: to increase agencies’ attention to properly clearing information collection requests. One way to shift this responsibility, the forum suggested, would be for OMB to be more creative in its use of the delegation authority that the act provides. (Under the act, OMB has the authority to delegate to agencies the authority to approve collections in various circumstances.) Also, participants mentioned the possibility of modifying the clearance process by, for example, extending beyond 3 years the length of time that OMB approvals are valid, particularly for routine types of collections. This suggestion was paired with the idea that the review process itself should be more rigorous; as the panel put it, “now it’s a rather pro forma process.” They also observed that two Federal Register notices seemed excessive in most cases. To reduce the number of collections that require OMB review, another possibility suggested was to revise the PRA’s definition of an information collection. For example, the current definition includes all collections that contact 10 or more persons; the panel suggested that this threshold could be raised, pointing out that this low threshold makes it hard for agencies to perform targeted outreach to the public regarding burden and other issues (such as through customer satisfaction questionnaires or focus groups).
However, they had no specific recommendation on what the number should be. Alternatively, they suggested that OMB could be given authority to categorize types of information collections that did not require clearance (for example, OMB could exempt collections for which the response is purely voluntary). Finally, the forum questioned giving agency CIOs the responsibility for reviewing information collections. According to the forum, CIOs have tended to be more associated with information technology issues than with high-level information policy. Our previous work has not addressed every topic raised by the forum, so we cannot argue for or against all these suggestions. However, the work in our May 2005 report is consistent with the forum’s observations in some areas, including the lack of rigor in the review process and the questionable need for two Federal Register notices. I would like to turn here, Madam Chairman, to the matters for congressional consideration that we included in that report. We observed that the targeted approaches to burden reduction used by IRS and EPA were a promising alternative. However, the agencies’ experiences also suggest that making such approaches successful requires top-level executive commitment, extensive involvement of program office staff with appropriate expertise, and aggressive outreach to stakeholders. Indications are that such an approach would also be more resource-intensive than the current process. Moreover, such an approach may not be warranted at all agencies, since not all agencies face the level of paperwork issues that IRS and similar agencies do. On the basis of the conclusions in our May 2005 report, we suggested that the Congress consider mandating the development of pilot projects to test and review the value of approaches to burden reduction similar to those used by IRS and EPA. OMB would issue guidance to agencies on implementing such pilots, including criteria for assessing collections along the lines of the process currently employed by IRS. According to our suggestion, agencies participating in such pilots would submit to OMB and publish on their Web sites (or through other means) an annual plan on the collections targeted for review, specific burden reduction goals for those collections, and a report on reductions achieved to date. We also suggested that in view of the limited effectiveness of the 60-day notice in the Federal Register in eliciting public comment, this requirement could be eliminated. Under a pilot project approach, an agency would develop a process to examine its information collections for opportunities to reduce burden. The experiences at IRS and EPA show that targeted burden reduction efforts depend on tapping the expertise of program staff, who are generally closely involved in the effort. That is, finding opportunities to reduce burden requires strong familiarity with the programs involved. Pilot projects would be expected to build on the lessons learned at IRS and EPA. For example, these agencies have used a variety of approaches to reducing burden, such as ● sharing information—for example, by facilitating cross-agency information sharing; ● standardizing data for multiple uses (“collect once—use multiple times”); ● integrating data to avoid duplication; and ● re-engineering work flows. Pilot projects would be most appropriate for agencies for which information collections are a significant aspect of the mission.
As the results and lessons from the pilots become available, OMB may choose to apply them at other agencies by approving further pilots. Lessons learned from the mandated pilots could thus be applied more broadly. In developing processes to involve program offices in burden reduction, agencies would not have to impose a particular organizational structure for the burden reduction effort. For instance, the burden reduction effort might not necessarily be performed by the CIO. For example, at IRS, the Office of Taxpayer Burden Reduction is not connected to the CIO, whereas at EPA, CIO staff are involved in promoting burden reduction through staff education and outreach. However, the EPA CIO depends on program offices to undertake specific initiatives. Under a mandate for pilot projects, agencies would be encouraged to determine the approach that works best in their own situations. Finally, both IRS and EPA engaged in extensive outreach to the public and stakeholders. In many cases, this outreach involved contacts with professional and industry organizations, which are particularly valuable because they allow the agencies to get feedback without the need to design an information collection for the purpose (which would entail its own review process, burden estimate, and so on). According to agency officials, the need to obtain OMB approval for an information collection if they contact more than nine people often inhibits agencies’ use of questionnaires and similar types of active outreach to the public. Agencies are free, however, to collect comments on information posted on Web sites. OMB could also choose to delegate to pilot project agencies the authority to approve collections that are undertaken as part of public outreach for burden reduction projects. The work we reported in May and June 2005 strongly suggested that despite the importance of public consultation to burden reduction, the current approach is often ineffective. Federal Register notices elicit such low response that we questioned the need for two such notices (the 60-day notice during the agency review and the 30-day notice during the OMB review). Eliminating the first notice, in our view, is thus not likely to decrease public consultation in any significant way. Instead, our intent was for agencies, through pilot projects, to explore ways to perform outreach to information collection stakeholders, including the public, that will be more effective in eliciting useful comments and achieving real reductions in burden. In summary, Madam Chairman, the information collection review process appeared to have little effect on paperwork burden. As our review showed, the CIO review process, as currently implemented, tended to lack rigor, allowing agencies to focus on clearing an administrative hurdle rather than on performing substantive analysis. Going further, the expert forum characterized the whole clearance process as “pro forma.” The forum also made various suggestions for improving the clearance process; many of these were aimed at finding ways to reduce its absorption of OMB resources, such as by changing the definition of an information collection. Both we and the forum suggested removing one of the current administrative hurdles (the 60-day Federal Register notice). Although these suggestions refer to specific process improvements, the main point is not just to tweak the process.
Instead, the intent is to remove administrative impediments, with the ultimate aim of refocusing agency and OMB attention away from the current concentration on administrative procedures and toward the goals of the act—minimizing burden while maximizing utility. To that end, we suggested that the Congress mandate pilot projects that are specifically targeted at reducing burden. Such projects could help to move toward the outcomes that the Congress intended in enacting PRA. Madam Chairman, this completes my prepared statement. I would be pleased to answer any questions. For further information regarding this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6420, or koontzl@gao.gov. Other individuals who made key contributions to this testimony were Timothy Bober, Barbara Collier, David Plocher, Elizabeth Powell, J. Michael Resser, and Alan Stapleton. Forum participants (listed by title and affiliation) included: ● Associate Executive Director, Association of Research Libraries ● Princeton University; formerly National Opinion Research Center ● Founder and Executive Director, OMB Watch ● Administrative Law & Information Policy; formerly OMB and IRS Privacy Officer ● Vice President, SRA International; formerly OMB ● Director, Center for Technology in Government, University at Albany, State University of New York ● Partner, Guerra, Kiviat, Flyzik and Associates; formerly Department of Treasury and Secret Service ● Privacy & Information Policy Consultant; formerly counsel to the House of Representatives’ Subcommittee on Information, Justice, Transportation and Agriculture ● Editor of Access Reports, FOIA Journal ● Independent Consultant; formerly OMB ● Professor and Independent Consultant; formerly OMB ● Manager, Regulatory Policy, National Federation of Independent Business ● Director, Public Policy, Software & Information Industry Association ● Senior Scientist, Computer Science and Telecommunications Board, The National Academies ● Fellow in Law and Government, American University, Washington College of Law; formerly Administrative Conference of the U.S. At the forum, others attending included GAO staff and a number of other observers: ● Deputy Administrator, Office of Information and Regulatory Affairs ● Minority Senior Legislative Counsel, House Committee on Government Reform ● Minority Counsel, House Committee on Government Reform ● Policy Analyst, Office of Information and Regulatory Affairs ● Director of Energy and Environment, U.S. Chamber of Commerce ● Senior Policy Analyst, Office of Management and Budget ● Policy Analyst, Office of Management and Budget ● Minority Professional Staff Member, House Committee on Government Reform ● Counsel, House Committee on Government Reform ● Policy Analyst, Office of Management and Budget ● Office of Information and Regulatory Affairs ● Senior Professional Staff Member, House Committee on Government Reform ● House Committee on Government Reform ● Branch Chief, Statistics Branch, Office of Information and Regulatory Affairs. In addition, staff of the National Academies’ National Research Council, Computer Science and Telecommunications Board, helped to develop and facilitate the forum: Charles Brownstein, Director; Kristen Batch, Research Associate; and Margaret Huynh, Senior Program Assistant. Paperwork Reduction Act: Subcommittee Questions Concerning the Act’s Information Collection Provisions. GAO-05-909R. Washington, D.C.: July 19, 2005. Paperwork Reduction Act: Burden Reduction May Require a New Approach. GAO-05-778T. Washington, D.C.: June 14, 2005.
Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005. Paperwork Reduction Act: Agencies’ Paperwork Burden Estimates Due to Federal Actions Continue to Increase. GAO-04-676T. Washington, D.C.: April 20, 2004. Paperwork Reduction Act: Record Increase in Agencies’ Burden Estimates. GAO-03-619T. Washington, D.C.: April 11, 2003. Paperwork Reduction Act: Changes Needed to Annual Report. GAO-02-651R. Washington, D.C.: April 29, 2002. Paperwork Reduction Act: Burden Increases and Violations Persist. GAO-02-598T. Washington, D.C.: April 11, 2002. Information Resources Management: Comprehensive Strategic Plan Needed to Address Mounting Challenges. GAO-02-292. Washington, D.C.: February 22, 2002. Paperwork Reduction Act: Burden Estimates Continue to Increase. GAO-01-648T. Washington, D.C.: April 24, 2001. Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000. Tax Administration: IRS Is Working to Improve Its Estimates of Compliance Burden. GAO/GGD-00-11. Washington, D.C.: May 22, 2000. Paperwork Reduction Act: Burden Increases at IRS and Other Agencies. GAO/T-GGD-00-114. Washington, D.C.: April 12, 2000. EPA Paperwork: Burden Estimate Increasing Despite Reduction Claims. GAO/GGD-00-59. Washington, D.C.: March 16, 2000. Federal Paperwork: General Purpose Statistics and Research Surveys of Businesses. GAO/GGD-99-169. Washington, D.C.: September 20, 1999. Paperwork Reduction Act: Burden Increases and Unauthorized Information Collections. GAO/T-GGD-99-78. Washington, D.C.: April 15, 1999. Paperwork Reduction Act: Implementation at IRS. GAO/GGD-99-4. Washington, D.C.: November 16, 1998. Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998. Paperwork Reduction: Information on OMB’s and Agencies’ Actions. GAO/GGD-97-143R. Washington, D.C.: June 25, 1997. Paperwork Reduction: Governmentwide Goals Unlikely to Be Met. GAO/T-GGD-97-114. Washington, D.C.: June 4, 1997. Paperwork Reduction: Burden Reduction Goal Unlikely to Be Met. GAO/T-GGD/RCED-96-186. Washington, D.C.: June 5, 1996. Environmental Protection: Assessing EPA’s Progress in Paperwork Reduction. GAO/T-RCED-96-107. Washington, D.C.: March 21, 1996. Paperwork Reduction: Burden Hour Increases Reflect New Estimates, Not Actual Changes. GAO/PEMD-94-3. Washington, D.C.: December 6, 1993.
Americans spend billions of hours each year providing information to federal agencies by filling out forms, surveys, or questionnaires. A major aim of the Paperwork Reduction Act (PRA) is to minimize the burden that these information collections impose on the public, while maximizing their public benefit. Under the act, the Office of Management and Budget (OMB) is to approve all such collections. In addition, agency Chief Information Officers (CIO) are to review information collections before they are submitted to OMB for approval and certify that these meet certain standards set forth in the act. GAO was asked to testify on the implementation of the act's provisions regarding the review and approval of information collections. For its testimony, GAO reviewed previous work in this area, including the results of an expert forum on information resources management and the PRA, which was held in February 2005 under the auspices of the National Research Council. GAO also drew on its earlier study of CIO review processes (GAO-05-424) and alternative processes that two agencies have used to minimize burden. For this study, GAO reviewed a governmentwide sample of collections, reviewed processes and collections at four agencies that account for a large proportion of burden, and performed case studies of 12 approved collections. Among the PRA provisions aimed at helping to achieve the goals of minimizing burden while maximizing utility is the requirement for CIO review and certification of information collections. GAO's review of 12 case studies showed that CIOs provided these certifications despite often missing or inadequate support from the program offices sponsoring the collections. Further, although the law requires that support be provided for certifications, agency files contained little evidence that CIO reviewers had made efforts to get program offices to improve the support they offered. Numerous factors have contributed to these problems, including a lack of management support and weaknesses in OMB guidance. Because these reviews were not rigorous, OMB, the agency, and the public had reduced assurance that the standards in the act--such as minimizing burden--were consistently met. To address the issues raised by its review, GAO made recommendations to the agencies and OMB aimed at strengthening the CIO review process and clarifying guidance. OMB and the agencies report making plans and taking steps to address GAO's recommendations. Beyond the collection review process, the Internal Revenue Service (IRS) and the Environmental Protection Agency (EPA) have set up processes that are specifically focused on reducing burden. These agencies, whose missions involve numerous information collections, have devoted significant resources to targeted burden reduction efforts that involve extensive public outreach. According to the two agencies, these efforts led to significant reductions in burden. For example, each year, IRS subjects a few forms to highly detailed, in-depth analyses, reviewing all data requested, redesigning forms, and involving stakeholders (both the information users and the public affected). IRS reports that this process--performed on forms that have undergone CIO review and received OMB approval--has reduced burden by over 200 million hours since 2002. In contrast, for the 12 case studies, the CIO review process did not reduce burden. 
When it considers PRA reauthorization, the Congress has the opportunity to promote new approaches, including alternatives suggested by the expert forum and by GAO. Forum participants made a range of suggestions on information collections and their review. For example, they suggested that OMB's focus should be on broad oversight rather than on reviewing each individual collection and observed that the current clearance process appeared to be "pro forma." They also observed that it seemed excessive to require notices of collections to be published twice in the Federal Register, as they are now. GAO similarly observed that publishing two notices in the Federal Register did not seem to be effective, and suggested eliminating one of these notices. GAO also suggested that the Congress mandate pilot projects to target some collections for rigorous analysis along the lines of the IRS and EPA approaches. Such projects would permit agencies to build on the lessons learned by the IRS and EPA and potentially contribute to true burden reduction.
Federal grants use various sources of population counts in their funding formulas. First, the Bureau conducts the decennial census, which provides population counts once every 10 years, and also estimates the population for the years between censuses—known as postcensal estimates. For example, the SSBG allocation formula uses the most recent postcensal population estimates to distribute funds. Second, the Bureau’s American Community Survey provides detailed annual data on socioeconomic characteristics for the nation’s communities and is used to allocate federal funds for such programs as the Section 8 housing voucher program, an effort aimed at increasing affordable housing choices for very low-income households. In addition, the Current Population Survey conducted by the Bureau for the Bureau of Labor Statistics provides monthly data and is used to allocate funds for programs under the Workforce Investment Act of 1998, which provides workforce development services to employers and workers. Among funding formulas that rely on population data, the degree of reliance varies. On the one hand, the SSBG formula allocates funding based on a state’s population relative to the total U.S. population. On the other hand, some formulas use population plus one or more other variables to determine funding levels. Medicaid, for example, uses population counts and income to determine the federal reimbursement rate. The Medicaid formula compares a state’s per capita personal income—its aggregate personal income divided by its population—with U.S. per capita personal income. Other grant programs, such as CDBG, are driven by multiple factors in addition to population, such as poverty, housing overcrowding, and the age of the housing. Population plays a more limited role in other programs. Federal assistance under one part of the CARE Act does not use census population counts in its funding formula. Rather, census population counts serve under this part as criteria for program eligibility: funds are awarded to urbanized areas, which are determined in part by census population counts. The actual amount of federal assistance is based on counts of people with HIV/AIDS. On the basis of simulations we conducted of formula grant allocations, we found that changes in population counts can affect, albeit modestly, the allocations of federal funds for the programs analyzed. Note that these simulations were for illustrative purposes only—to demonstrate the effect that alternative population estimates could have on selected federal grant programs. In two prior reports, we simulated the reallocations that would have resulted from using alternative population counts for Medicaid allocations. In our 2003 report, based on population estimates that differed from the 2000 Census count by about 3.2 percent across the U.S. and varying state by state, we found that of the $110.9 billion total federal Medicaid spending in 2002, 18 states would have shared an additional $377.0 million in Medicaid funding, 21 states would have lost a collective $363.2 million, and 11 states and the District of Columbia would have had their funding unchanged. In our 2006 report, based on population estimates that differed from the 2000 Census count by about 0.5 percent across the U.S.
and varying state by state, we found that of the $159.7 billion total federal Medicaid funding in 2004, 22 states would have shared an additional $208.5 million in Medicaid funding, 17 states would have lost a total of $368 million, and 11 states and the District of Columbia would have had their funding unchanged. In total, 0.2 percent of Medicaid funds would have shifted as a result of the simulation. In our 2006 report, we also simulated the reallocations of SSBG funding and found that of the $1.7 billion in SSBG allocations, 27 states and the District of Columbia would have shared a gain of $4.2 million and 23 states would have shared a loss of $4.2 million. In total, 0.2 percent of SSBG funds would have shifted as a result of the simulation. Since the completeness and accuracy of population data can modestly affect grant funding streams and other applications of census data, the Bureau has used a variety of programs to address possible errors in population counts and estimates. Not all of these programs are completed by December 31 of the decennial year—the date on which population data are to be sent to the President for purposes of congressional apportionment. Corrections made after this date may be reflected in the population counts made available for redistricting or the allocation of federal funds. Demographic Full Count Review: For the 2000 Census, analysts were hired under contract by the Bureau to identify, investigate, and document suspected data discrepancies in order to clear census data files and products for subsequent processing or public release. Bureau reviewers were to determine whether and how to correct the data by weighing quality improvements against time and budget constraints. Bureau officials told us that they expect to implement something similar to the 2000 program, but they have not made a final decision for 2010. Count Question Resolution (CQR): In addition, for the 2000 Census the Bureau implemented the CQR program to provide a mechanism for state, local, and tribal governments, as well as Bureau personnel, to correct the counts of housing units and other types of dwellings and their associated populations. Governmental entities could use the updated information when applying for federal assistance that uses census data as part of the allocation formula. Between the program’s initiation in June 2001 and its completion in September 2003, the CQR program corrected data affecting over 1,180 of the nation’s more than 39,000 governmental units. Although the national- and state-level revisions were relatively small, in some cases the corrections at the local level were substantial. For example, the Bureau added almost 1,500 persons to the population count of Cameron, Missouri, when CQR found that a prison’s population was erroneously omitted. Bureau officials told us that they expect to implement something similar to the 2000 program, but they have not made a final decision for 2010. Census Challenge Program: Further, to permit challenges to population estimates prepared by the Bureau, the Bureau administers a program whereby governmental units—including states, counties, and tribal and local governments—may file informal challenges within a designated period of time after the estimate is released by the Bureau. In the event that a challenge cannot be resolved informally, the governmental unit may proceed with a formal challenge, in which the state or local government unit has a right to a hearing.
Using such documentation as new construction permits and data from water and electrical utilities, localities can ask the Bureau to review and update their population counts. Between 2001 and 2007, 259 challenges led to adjustments in census population estimates. Coverage Measurement: Beginning with the 1980 Census, the Bureau has had procedures in place to measure the accuracy of the census (or “coverage”) by relying on additional information obtained from an independent sample survey of the population. However, due to concerns over the quality of the data and other factors, the Bureau has never used the results of its coverage measurement efforts to adjust the census population count. For the 2010 Census, the Bureau plans to measure coverage error for various demographic groups and geographic areas, but does not plan to use the results to adjust the final population counts. Although accurate population counts and estimates play an important role in allocating federal assistance, various other factors related to the design of federal grant programs may mitigate or increase the effect that population changes can have on the distribution of federal funds. These factors include floors on matching rates, floors for small states, hold-harmless provisions, complex formula structures, lags in data, and whether funding for a specific program is from a fixed pool or open ended. I will describe each in greater detail. Floors on Matching Rates: Some grant programs employ floors in order to mitigate the outcome that would result if a particular grant allocation were determined by the funding formula alone. For example, the Medicaid statute provides for a 50 percent floor. In our 2003 report on federal formula grant funding, we found that for certain states the Medicaid matching provisions mitigated the effect of the Medicaid funding formula, which has a population component. In 2002, under the statutory formula, which is based on a state’s personal income relative to its population, Connecticut—a state with a high per capita income—would have received a 15 percent federal matching rate. Because of the statutory floor, Connecticut instead received a 50 percent federal match.
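The interplay of formula and floor can be sketched in a few lines of code. The sketch below is illustrative only: the rate formula, the 50 percent floor, and the 83 percent ceiling reflect the Medicaid statute, which this testimony summarizes rather than quotes, and the state income figures are invented.

```python
# Illustrative sketch of the Medicaid federal matching rate ("FMAP") floor.
# Assumed from the statute (not quoted in this testimony): the federal share
# is 1 - 0.45 * (state per capita income / U.S. per capita income)^2,
# bounded below by 50 percent and above by 83 percent.

def matching_rate(state_pci: float, us_pci: float) -> float:
    rate = 1 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(rate, 0.50), 0.83)  # apply statutory floor and ceiling

US_PCI = 30_000  # invented U.S. per capita income
# High-income state: the unadjusted formula yields about 16 percent,
# so the 50 percent floor binds (as it did for Connecticut in 2002).
print(matching_rate(41_000, US_PCI))  # -> 0.5
# Lower-income state: the formula rate applies directly.
print(matching_rate(24_000, US_PCI))  # -> about 0.712
```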
Floors for Small States: To ensure at least a minimum level of funding for all states, programs with formulas that rely on population data can include floors for small states. The vocational rehabilitation (VR) formula employs a floor allocation that overrides the population-based allocations. The least-populated states receive a higher allocation than they would have otherwise received under the formula. Hold-Harmless Provisions: In order to prevent funding losses from a formula change, programs can include hold-harmless provisions guaranteeing a level of funding that is based on a prior year’s funding. For example, one part of the CARE Act contains hold-harmless provisions whereby some recipients are guaranteed that they will receive at least as much funding as in the previous year. Complex Formula Structures: Many formulas include measures other than population to distribute funds. VR allocations depend upon three factors: the state’s 1978 allocation, population, and per capita income. As a result, the effect of increases in population may be mitigated by the state’s 1978 allocation and changes to the state’s per capita income. CDBG allocations are based on a complex dual formula structure using statistical factors reflecting several broad dimensions of need. Each metropolitan city and urban county is entitled to receive an amount equaling the greater of the amounts calculated under two formulas. The factors involved in the first formula are population, extent of poverty, and extent of overcrowded housing, weighted 0.25, 0.50, and 0.25, respectively. The factors involved in the second formula are population growth lag, poverty, and age of housing, weighted 0.20, 0.30, and 0.50, respectively. In these formulas, the inclusion of population moderates the targeting impact of the other formula factors. We previously reported that complex approaches such as this can result in widely different payments to communities with similar needs. Lags in Data Used to Allocate Funds: Statutes that require formulas to use specific sources of data can introduce lags when those data are not immediately available. Lags inherent in the collection and publication of data by statistical agencies can result in a formula relying on data that are several years old. For example, the Medicaid statute generally specifies that matching rates for Medicaid be calculated 1 year before the fiscal year in which they are effective, using a 3-year average of the most recently available per capita income data reported by the Department of Commerce. For fiscal year 2007, matching rates were calculated at the beginning of fiscal year 2006 using 3-year average data for 2002 through 2004—the latest then available. Recipients that have experienced recent population changes may therefore view such allocations as slow to respond to those changes. Fixed Pool versus Open-Ended Funding: Most programs have a finite or fixed pool of funds to distribute, while others do not. In a fixed-pool program, such as SSBG, when a population change results in an increased allocation for one state, the increase is offset by decreases in allocations to one or more other states. In open-ended programs, such as Medicaid, states can receive more funding when they spend more from their own sources of revenue, without a corresponding decrease to other states.
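A toy calculation makes the fixed-pool point concrete. The sketch below is hypothetical in every figure; the proportional formula simply mirrors the SSBG approach described earlier, and the point it demonstrates is that one state's gain under a fixed pool is exactly offset by the others' losses.

```python
# Toy illustration of a fixed-pool, population-share allocation (the
# SSBG formula works this way); all figures here are hypothetical.

def allocate(pool: float, populations: dict) -> dict:
    total = sum(populations.values())
    return {state: pool * pop / total for state, pop in populations.items()}

POOL = 1_700_000_000  # a fixed appropriation to distribute
before = {"A": 5_000_000, "B": 3_000_000, "C": 2_000_000}
after = dict(before, A=5_200_000)  # state A's population count is revised up

base, revised = allocate(POOL, before), allocate(POOL, after)
for state in before:
    print(state, round(revised[state] - base[state]))
# A gains about $16.7 million, while B and C together lose the same
# amount; because the pool is fixed, gains and losses net to zero. Under
# an open-ended program such as Medicaid, A's increase would not come at
# B's and C's expense.
```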
In conclusion, while population data play an important role in allocating federal assistance through formula grant programs, other grant-specific features—several of which I have discussed today—can also play a role in funding allocations, and in some cases can mitigate or entirely mute the impact of a change in population. Importantly, not all grants work the same, and an increase or decrease in population size may not have a proportional impact on ultimate funding levels. Nevertheless, given the importance of census data as a baseline for postcensal estimates used for grant programs, as well as for congressional apportionment and redistricting, counting the nation’s population once, only once, and in the right location in 2010 is an essential first step. Mr. Chairman, this concludes my remarks. I will be glad to answer any questions that you or other Subcommittee members may have. For further information regarding this statement, please contact Robert Goldenkoff, Director, Strategic Issues, on (202) 512-2757 or at goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement included Ty Mitchell, Assistant Director; Sarah Cornetto; Erin Dexter; Robert Dinkelmeyer; Gregory Dybalski; Amber G. Edwards; Amanda Harris; and Tamara F. Stenzel. Ryan White CARE Act: Estimated Effect of Proposed Stop-Loss Provision on Urban Areas. GAO-09-472R. Washington, D.C.: March 6, 2009. 2010 Census: Population Measures Are Important for Federal Allocations. GAO-08-230T. Washington, D.C.: October 27, 2007. Ryan White CARE Act: Impact of Legislative Funding Proposal on Urban Areas. GAO-08-137R. Washington, D.C.: October 5, 2007. Vocational Rehabilitation: Improved Information and Practices May Enhance State Agency Earnings Outcomes for SSA Beneficiaries. GAO-07-521. Washington, D.C.: May 23, 2007. Medicaid: Strategies to Help States Address Increased Expenditures during Economic Downturns. GAO-07-97. Washington, D.C.: October 18, 2006. Community Development Block Grants: Program Offers Recipients Flexibility but Oversight Can Be Improved. GAO-06-732. Washington, D.C.: July 28, 2006. Community Development Block Grant Formula: Options for Improving the Targeting of Funds. GAO-06-904T. Washington, D.C.: June 27, 2006. Federal Assistance: Illustrative Simulations of Using Statistical Population Estimates for Reallocating Certain Federal Funding. GAO-06-567. Washington, D.C.: June 22, 2006. Data Quality: Improvements to Count Correction Efforts Could Produce More Accurate Census Data. GAO-05-463. Washington, D.C.: June 20, 2005. Grants Management: Additional Actions Needed to Streamline and Simplify Processes. GAO-05-335. Washington, D.C.: April 18, 2005. Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003. Formula Grants: 2000 Census Redistributes Federal Funding Among States. GAO-03-178. Washington, D.C.: February 24, 2003. 2000 Census: Coverage Measurement Programs’ Results, Costs, and Lessons Learned. GAO-03-287. Washington, D.C.: January 29, 2003. Formula Grants: Effects of Adjusted Population Counts on Federal Funding to States. GAO/HEHS-99-69. Washington, D.C.: February 26, 1999.
In past years, the federal government has annually distributed over $300 billion in federal assistance through grant programs using formulas driven in part by census population data. Of the more than $580 billion in additional federal spending under the American Recovery and Reinvestment Act of 2009, an estimated $161 billion will be obligated to federal grant programs for fiscal year 2009. The U.S. Census Bureau (Bureau) puts forth tremendous effort to conduct an accurate count of the nation's population, yet some error in the form of persons missed or counted more than once is inevitable. Because many federal grant programs rely to some degree on population measures, shifts in population, inaccuracies in census counts, and methodological problems with population estimates can all affect the allocation of funds. This testimony discusses (1) how census data are used in the allocation of federal formula grant funds and (2) how the structure of the formulas and other factors can affect those allocations. It is based primarily on GAO's issued work on various formula grant programs and the allocation of federal funds. Federal grants use various sources of population counts in their funding formulas. They include the decennial census, which provides population counts once every 10 years and also serves as the baseline for estimates of the population for the years between censuses--known as postcensal estimates. Other sources of population data include the Bureau's American Community Survey and the Current Population Survey, conducted by the Bureau for the Bureau of Labor Statistics, which provides monthly data. The degree of reliance on population in funding formulas varies. For example, the Social Services Block Grant formula allocates funding based solely on a state's population relative to the total U.S. population. Other programs use population plus one or more variables to determine funding levels. Medicaid, for example, uses population counts and income to determine its federal reimbursement rate. On the basis of simulations GAO conducted of federal grant allocations by selected federal grant programs--for illustrative purposes only--GAO found that changes in population counts can affect, albeit modestly, the allocations of federal funds across the states. For example, in 2006 GAO found that of the $159.7 billion in total federal Medicaid funding in 2004, 22 states would have shared an additional $208.5 million in Medicaid funding, 17 states would have lost a total of $368 million, and 11 states and the District of Columbia would have had their funding unchanged. In total, 0.2 percent of Medicaid funds would have shifted as a result of the simulation. In addition to population data, various other factors related to the design of federal grant programs may mitigate the effect that population changes can have on the distribution of federal funds. For example, in order to prevent funding losses from a formula change, several programs include hold-harmless provisions guaranteeing that each recipient entity will receive a specified proportion of the prior year's amount or share regardless of population changes.
OSC’s primary role is to safeguard the merit system in federal employment by protecting federal employees, former federal employees, and applicants for federal employment from prohibited personnel practices. OSC receives and independently investigates allegations of prohibited personnel practices. Before completing its investigation and making its determination about whether there are reasonable grounds for believing a violation has occurred, OSC may refer the case to its alternative dispute resolution (ADR) program. If OSC cannot process a case within the 240-day limit, the agency must get permission from the complainant to keep the case open. If OSC finds reasonable grounds for believing that a violation occurred, OSC can seek corrective action, disciplinary action, or both through negotiation with the agency involved. If an agreement cannot be reached, OSC can file a petition (for corrective action) or a complaint (seeking disciplinary action) with the Merit Systems Protection Board (MSPB). OSC functions as a case prosecutor before the MSPB. OSC also receives whistleblower disclosure claims. As described by law, these consist of a violation of law, rule, or regulation; gross mismanagement; a gross waste of funds; an abuse of authority; or a substantial and specific danger to public health or safety. Unless disclosure of a whistleblower’s identity is necessary because of imminent danger to public health or safety or imminent violation of criminal law, OSC maintains the whistleblower’s anonymity. (See app. I for more information on OSC’s privacy policies.) Unlike its actions with respect to prohibited personnel practices, OSC does not independently investigate whistleblower disclosure cases. Instead, OSC is required to determine within 15 days whether there is a substantial likelihood that the allegations constitute wrongdoing. If so, OSC sends the information to the head of the agency where the individual making the allegations works. The agency head must conduct an investigation and submit a written report to OSC. OSC is responsible for reviewing the agency’s report to determine whether the findings appear to be reasonable and whether the report contains all of the information required by statute. Unless the agency has found evidence of a criminal violation, OSC provides the whistleblower with a copy of the agency report for comment. OSC also investigates and prosecutes complaints about possible violations of the Hatch Act, which regulates the political activities of federal employees, District of Columbia employees, and certain state and local government employees employed in connection with programs financed by federal funds. OSC may seek corrective or disciplinary action in Hatch Act cases before the MSPB. OSC also issues advisory opinions to persons seeking advice about how certain kinds of political activity are treated under the Hatch Act. In addition, OSC investigates and prosecutes complaints under the Uniformed Services Employment and Reemployment Rights Act (USERRA) of 1994, which covers the employment rights of individuals serving in the uniformed military services. During each fiscal year, OSC receives new cases that are added to open cases from previous years; together these equal the “total caseload inventory”. During the year, as OSC processes cases, the total caseload inventory decreases.
At the end of the fiscal year, the agency has an “ending inventory”, which consists of pending cases (those that have not reached the applicable statutory limit for case processing) plus backlogged cases (those for which the applicable statutory limit for case processing has passed without the applicable determination being made). The ending inventory for one fiscal year becomes the beginning inventory for the following year. OSC’s authority comes from five federal statutes: the Civil Service Reform Act of 1978, the Whistleblower Protection Act of 1989, the Hatch Act, the Office of Special Counsel Reauthorization Act of 1994, and the Uniformed Services Employment and Reemployment Rights Act of 1994, as described in table 1. OSC maintains its headquarters in Washington, D.C., and has field offices in Dallas, Texas, and Oakland, California. A Special Counsel appointed by the President and confirmed by the Senate for a five-year term heads the agency. OSC is organized into five operating divisions and two administrative support branches. Appendix III discusses OSC’s organizational structure for its operating divisions and explains how those divisions process cases. Seventy-five percent of OSC’s new cases from fiscal year 1997 through 2003 consisted of prohibited personnel practices cases. Whistleblower disclosure cases represented approximately 18 percent of total new cases. The remaining approximately 6 percent were Hatch Act and USERRA cases. As shown in figure 2, the number of new prohibited personnel practices cases ranged from a low of 1,301 in fiscal year 2001 to a high of 1,969 in fiscal year 2000—a fluctuation of 51 percent. In fiscal year 2001, the number of new cases decreased sharply from the previous year. Excluding fiscal year 2001, however, the number of new prohibited personnel practices cases ranged from a low of 1,558 in fiscal year 2002 to a high of 1,969 in fiscal year 2000—a fluctuation of 26 percent. Whistleblower disclosure cases steadily increased from fiscal year 1997 through 2000 (by about 37 percent, from 311 to 427), then dropped somewhat (about 11 percent) in fiscal year 2001, only to increase dramatically in fiscal year 2002 (about 46 percent) and decrease slightly in fiscal year 2003 (about 4 percent). OSC officials could not tell us with any certainty the reasons for the year-to-year fluctuations in prohibited personnel practices cases. They said increases may have resulted from outreach efforts by OSC to educate federal employees and others about their rights and the agency’s roles. As for the steep decline in prohibited personnel practices cases in fiscal year 2001, officials said it may have resulted from increased awareness by federal managers about OSC’s role and responsibilities, which may have led managers to consult more with personnel specialists to avoid actions that could have led to a prohibited personnel practice. OSC officials expressed more certainty about the reasons for the fluctuation in whistleblower disclosure cases. Officials said the steady increase in such cases from fiscal year 1997 through 2000 may have been due to media coverage of several cases that brought attention to the agency and its role in handling these cases. OSC officials stated that the big jump in whistleblower disclosure cases in fiscal year 2002 was prompted, in part, by the terrorist events of September 11, 2001, after which the agency received more cases involving allegations of substantial and specific dangers to public health and safety and national security.
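The inventory bookkeeping described above reduces to a simple carryover identity, and it is the computation behind the backlog percentages reported below. The following sketch illustrates the arithmetic with entirely hypothetical counts; the figures are not OSC data.

```python
# Illustrative caseload accounting for one fiscal year; all counts are
# hypothetical. Backlogged cases are open cases that have passed the
# statutory processing limit; pending cases are open but within the limit.

beginning_inventory = 900   # ending inventory carried over from the prior year
new_cases = 1_600
processed = 1_700           # determinations made during the year

total_caseload = beginning_inventory + new_cases     # 2,500
ending_inventory = total_caseload - processed        # 800

backlogged = 500            # hypothetical split of the ending inventory
pending = ending_inventory - backlogged              # 300

print(f"Ending inventory: {ending_inventory} "
      f"({pending} pending + {backlogged} backlogged)")
print(f"Backlog share of ending inventory: {backlogged / ending_inventory:.0%}")
# The ending inventory becomes next year's beginning inventory.
```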
OSC met the 240-day case processing statutory limit for about 77 percent of prohibited personnel practices cases from fiscal years 1997 through 2003 and met the 15-day statutory limit for whistleblower disclosure cases about 26 percent of the time. For prohibited personnel practices cases, the inability to process all new cases received each year in a timely manner meant that the annual percentage of backlogged cases was never below 29 percent and was as high as 44 percent. The percentage of whistleblower cases in backlog was always extremely high—95 to 97 percent. From fiscal year 1997 through 2003, OSC processed about 77 percent of prohibited personnel practices cases within the 240-day statutory limit. As shown in table 2, the number of cases processed within the limit decreased steadily from fiscal year 1998 through 2001 before increasing substantially in fiscal years 2002 and 2003. The total number of cases processed declined from fiscal year 1997 through 2000 and increased slightly each ensuing year. Nevertheless, in fiscal years 2002 and 2003, OSC’s improvement in both areas was still not as good as it had been in its best years—fiscal years 1998 and 1997, respectively. OSC did dramatically reduce the average processing time of a case in fiscal year 2002 (to 187 days) and fiscal year 2003 (to 135 days)—the best of the seven years. OSC’s efforts to process whistleblower disclosure cases were not as timely. From fiscal year 1997 through 2003, OSC met the 15-day statutory time limit for processing whistleblower disclosure cases 26 percent of the time. OSC’s average time to process a whistleblower disclosure case was more than six months for each of the seven years we examined. Of the 2,433 whistleblower disclosure cases that OSC processed during fiscal years 1997 through 2003, OSC found a “substantial likelihood” of wrongdoing for 86, or approximately 4 percent, and referred these cases to the head of the agency involved for investigation. In the remaining 96 percent of cases, which were not referred to the agency head, OSC took no further action for reasons such as (1) lack of jurisdiction, (2) lack of sufficient information to make a substantial likelihood determination, (3) the complainant’s agency inspector general had already investigated the disclosure, or (4) the disclosure was minor, withdrawn, or duplicative. In addition, in cases where OSC concludes there is not a substantial likelihood of wrongdoing, OSC may, with the consent of the whistleblower, transmit the information to the head of the agency where the complainant works. Congress has not established statutory limits for processing Hatch Act enforcement or USERRA cases. In fiscal year 2003, average processing time was 469 days for Hatch Act enforcement cases and 193 days for USERRA cases. OSC has established a standard of 30 days for processing Hatch Act advisory opinions. In fiscal year 2003, the average time spent processing opinions was 106 days. As shown in table 4, the percentage of prohibited personnel practices cases in backlog fluctuated between 29 and 44 percent during the seven-year period. In fiscal year 1999, the number of cases processed declined by nearly 13 percent from the previous year. This decline, coupled with a nearly identical increase (nearly 14 percent) in cases received in fiscal year 2000, resulted in the large backlog of cases by the end of that year. The number of cases received in fiscal year 2001 dropped substantially from the previous year.
While the number of cases processed in fiscal year 2001 was slightly higher than the previous year, the number of cases in backlog by the end of fiscal year 2001 increased from the previous year. The lower beginning inventory in fiscal year 2002, combined with a modest increase in new cases and processed cases, resulted in an ending inventory that was about 33 percent less than the ending inventory in fiscal year 1997. However, an increase in the number of new cases in fiscal year 2003 and a smaller increase in cases processed that year led to an increase in ending inventory, but a decrease in the part of that inventory consisting of backlogged cases. Table 5 shows that over the seven-year period, whistleblower disclosure cases in backlog averaged 96 percent. Total caseload increased by about 20 percent from fiscal year 1997 through 2001, then jumped an additional 34 percent in fiscal year 2002 and about 30 percent in fiscal year 2003. This resulted in an ending inventory in fiscal year 2003 that was about 24 percent higher than the previous year and about 188 percent higher than in fiscal year 1997, as well as an increase in backlogged cases that was about 23 percent higher than in fiscal year 2002 and 192 percent higher than in fiscal year 1997. Although OSC cannot control the number of new cases filed, actions the agency has taken to handle its caseload more efficiently have yielded some benefits, according to agency data. These data show that the merger of the agency’s investigators and attorneys into three parallel investigative and prosecutive units and the adoption of streamlined investigative procedures increased productivity, while the adoption of a priority system for processing prohibited personnel practices and whistleblower disclosure cases allowed more important cases to be handled more expeditiously. Agency data also show that referral of a small number of prohibited personnel practices cases for ADR prior to making a “reasonable grounds” determination has reduced the backlog. In June 2001, OSC merged its investigators and attorneys, who had been in two separate divisions, into three parallel divisions in an attempt to (1) foster closer and more effective coordination of strategy between attorneys and investigators, (2) produce a more efficient case-handling procedure geared to helping the agency process cases within existing statutory time limits, and (3) target resources so that the most important cases could receive more in-depth and prompt attention. Shortly after the reorganization, OSC also adopted many of the streamlined investigative procedures that it had pilot tested. These included conducting more interviews by telephone for cases involving the least serious personnel actions and using a more flexible written format for documenting the findings of the investigation. Standard procedures require that the findings of an investigation be recorded in a detailed written report. However, on a case-by-case basis, a less formal report of investigation can be used. For example, the investigator may, with supervisory approval, opt to eliminate a detailed report of investigation if the evidence is so clear that it is not necessary. OSC officials stated that the reorganization increased productivity, thereby allowing cases to be processed faster. According to OSC officials, during the 3-1/2 years prior to the reorganization, the staffs of the separate divisions processed an average of 5.3 cases annually per person.
During the first year after the merger of the divisions, productivity increased to 7 cases processed per person. In November 2001, OSC issued a policy directive that adopted a priority case processing system that classifies all of its prohibited personnel practices cases into one of three categories. Under this approach, cases are investigated and analyzed based on the category to which they have been assigned. Category 1 prohibited personnel practices cases consist of the most serious personnel actions, involving employees who are threatened with removals, suspensions for more than 14 days, geographic reassignments, and reductions in grade. Category 2 cases are less severe, including challenges to suspensions of 14 days or fewer, performance appraisal ratings below “fully successful,” and denials of within-grade pay increases. Category 3 cases involve the least serious adverse personnel actions, such as lower performance ratings that are still “fully successful,” non-geographical reassignments or details, failure to promote, and reprimands. For category 3 cases, investigators may use streamlined procedures that may require less time and staff resources to complete. In addition, within each of these categories, prohibited personnel practices cases may be designated “priority”—meaning they will receive the most prompt attention—based on the following factors: the urgency of the need for redress, the strength of the evidence supporting a violation, or the public interest in prompt resolution of the case. For instance, priority cases within category 1 must either (1) meet OSC’s criteria for seeking a stay of the personnel action or already have a stay in effect, or (2) be a case in which OSC believes, on the basis of the evidence, that there is a substantial likelihood that the complaint is meritorious and in which OSC will seek corrective or disciplinary action. Within each category, whistleblower reprisal complaints are given top priority. Since the implementation of this system for prohibited personnel practices, OSC has been able to process priority cases faster than non-priority cases. OSC’s statistics show that from January 1, 2002, through September 30, 2002, OSC processed category 1 priority cases 17 percent faster than non-priority cases, category 2 priority cases 30 percent faster, and category 3 priority cases 50 percent faster. From October 1, 2002, through July 31, 2003, similar results were reported for category 1 and 3 cases. The number of category 2 priority cases during this period was too small for a reliable comparison with non-priority cases. OSC also has procedures to prioritize whistleblower disclosures. Although these procedures have been in use since 2002, an agency official indicated that they have not been adopted through a policy directive. Under the priority system, all cases are reviewed by the unit head, who makes an initial assessment of whether the cases are likely to meet the “substantial likelihood” determination and places the cases in one of three priority categories. Priority 1 cases are those that would likely meet the determination and would be referred to the agency head for investigation. Priority 1 cases are further divided into two subcategories: subcategory A cases deal with disclosures of substantial and specific dangers to public health and safety, while subcategory B cases deal with disclosures that do not involve health and safety.
Priority 2 cases are those where further review may be appropriate and would be referred to the agency’s Office of the Inspector General. These cases do not include health and safety allegations. Priority 3 cases are those that are not likely to meet the substantial likelihood determination and, therefore, are likely to be closed. OSC further categorizes these disclosures into subcategory A for those that involve disclosures of public health and safety and subcategory B for all other disclosures. According to OSC, in fiscal year 2002, on average, priority 1A cases—dealing with health and safety disclosures—were referred to agency heads 1.2 times faster than priority 1B cases, which deal with other violations. In fiscal year 2003, on average, OSC stated that priority 1A disclosures were referred to agency heads 2.4 times faster than priority 1B disclosures. Since 2000, OSC has been using an ADR program for a small number of prohibited personnel practices cases, which has reduced the average processing time for these cases and reduced the case backlog slightly. Through ADR, OSC reported that it resolved 13 cases in fiscal year 2002 and 15 cases in fiscal year 2003. The number of backlogged cases in table 4 reflects the resolution of these cases through ADR. Similarly, OSC reported that cases that went through ADR took 115 days in fiscal year 2002 and 122 days in fiscal year 2003. The data from table 2 on the processing time for prohibited personnel practices cases reflect that a small percentage of these cases was resolved through ADR. OSC’s ADR program is further discussed in appendix III.
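Read together, the prohibited personnel practices categories and the whistleblower disclosure priorities amount to a pair of triage rules. The sketch below is our simplified, illustrative encoding of those rules as described above; the function names, inputs, and decision logic are assumptions for exposition, not OSC's actual procedures.

```python
# Simplified, illustrative triage rules based on the category definitions
# described in the text; not OSC's actual decision logic.

def categorize_ppp(action: str) -> int:
    """Assign a prohibited personnel practices case to category 1, 2, or 3."""
    category_1 = {"removal", "suspension over 14 days",
                  "geographic reassignment", "reduction in grade"}
    category_2 = {"suspension of 14 days or fewer",
                  "rating below fully successful",
                  "denial of within-grade increase"}
    if action in category_1:
        return 1
    if action in category_2:
        return 2
    return 3  # least serious actions, e.g., reprimands or failure to promote

def prioritize_disclosure(likely_meets_standard: bool,
                          further_review_warranted: bool,
                          health_or_safety: bool) -> str:
    """Place a whistleblower disclosure in priority 1, 2, or 3, with A/B
    subcategories distinguishing health-and-safety disclosures."""
    if likely_meets_standard:                  # priority 1: refer to agency head
        return "1A" if health_or_safety else "1B"
    if further_review_warranted and not health_or_safety:
        return "2"                             # possible inspector general referral
    return "3A" if health_or_safety else "3B"  # likely closure

print(categorize_ppp("removal"))                   # -> 1
print(prioritize_disclosure(True, False, True))    # -> 1A
```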
OSC told us that the primary reason it was not able to process cases more quickly was inadequate resources. Our analysis shows that additional staff alone may not solve the case processing problems. For example, the agency processed about the same number of cases in fiscal year 2003 that it had in fiscal year 1999 (2,133 vs. 2,109), despite having 16 percent more attorneys and investigators. OSC noted several mitigating factors, including staff turnover and the need to train new staff, which limited its ability to process more cases and reduce the backlog of cases. While OSC notes that it is difficult to meet the 15-day limit for whistleblower disclosure cases, the agency has not proposed an alternative time limit that officials believe is more realistic. Moreover, while the agency’s priority system appears to help handle high priority cases faster, delays in processing whistleblower disclosure cases are still pervasive. OSC has not detailed in any of its documents created for Congress or the executive branch a comprehensive strategy for processing more cases within statutory time limits and reducing the backlog of cases. OSC officials told us that the primary reason that the agency has not been more successful in meeting the statutory time limit for its cases, particularly those involving whistleblower disclosure, is lack of an adequate number of staff. A preliminary OSC analysis of the prohibited personnel practices caseload since 1984 showed that through 1989, the agency’s caseload, staffing levels, and budget were fairly stable. From fiscal year 1991 to 1993, the number of new cases increased nearly 36 percent. Still, OSC data show that at the beginning of fiscal year 1993, no prohibited personnel practices case was more than 6 months old—well within the 240-day limit. The number of new cases declined slightly between fiscal years 1994 and 1996, and during this period (1991-1996) the number of full-time equivalent staff dropped from 90 to 86. These data, however, cover a period before the seven-year period that we examined. Since 1994, OSC has had backlogged prohibited personnel practices cases, and the number of such cases has fluctuated. OSC officials said the budget for staff has not increased fast enough to allow the agency to consistently meet the statutory time limits, especially for whistleblower disclosure cases. During fiscal years 2000 and 2001, Congress authorized OSC to hire 15 additional staff, which brought its full-time equivalent staff to 106. To decrease case processing times, most of the new staff were added to the divisions responsible for investigating and prosecuting cases. It does not appear that these staff and productivity increases after the 2001 merger of attorneys and investigators translated into the processing of a significantly larger number of prohibited personnel practices and whistleblower disclosure cases. The total number of cases processed in fiscal year 1997 was 2,385. The figure dropped in each of fiscal years 1998, 1999, 2000, and 2001 before increasing slightly in fiscal year 2002 and moderately in fiscal year 2003. The number of whistleblower disclosure cases processed dropped in both fiscal year 2001 (by about 13 percent) and the following year (by about 16 percent), before rebounding with a 40 percent increase in fiscal year 2003. But the number of such cases processed that year was still slightly less than in fiscal year 1999 (401 cases vs. 414). The number of prohibited personnel practices cases processed declined from fiscal year 1997 through fiscal year 2000, with small increases in each of the next three years. But the number of cases processed in fiscal year 2003 was about the same (1,732 vs. 1,695) as that in fiscal year 1999. OSC officials offered several mitigating factors that they say limited the agency’s ability to process more cases despite increases in staff. First, agency data showed a high turnover in staff between 2000 and 2003, which deprived the agency of institutional knowledge at a time when officials were trying to train new staff. During this period, OSC hired 47 new staff and had 37 departures—all of whom were investigators or attorneys involved in processing cases. Second, officials indicated that new staff need training and time to develop experience before they can become full contributors to case processing efforts. The ability of the staff to develop requisite experience through on-the-job training was hindered by the high turnover.
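The observation that more staff did not yield more processed cases can be checked with per-person arithmetic. In the sketch below, the case totals come from the figures above; the staff counts are hypothetical round numbers chosen only to reflect the roughly 16 percent staffing increase, since we did not report exact staff levels for these two years.

```python
# Cases processed per attorney/investigator, fiscal years 1999 vs. 2003.
# Case totals are from the text; staff counts are hypothetical, chosen only
# to reflect the roughly 16 percent staffing increase noted above.

cases_1999, staff_1999 = 2_109, 100
cases_2003, staff_2003 = 2_133, 116   # ~16 percent more staff

per_staff_1999 = cases_1999 / staff_1999
per_staff_2003 = cases_2003 / staff_2003
change = (per_staff_2003 - per_staff_1999) / per_staff_1999

print(f"1999: {per_staff_1999:.1f} cases per staff member")
print(f"2003: {per_staff_2003:.1f} cases per staff member ({change:+.0%})")
# Roughly 21.1 vs. 18.4 cases per person -- a decline of about 13 percent
# under these assumed staffing levels.
```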
Another factor that OSC officials cited as contributing to the agency’s difficulty in processing prohibited personnel practices cases within statutory time limits is the set of substantive and procedural processing requirements imposed by the Whistleblower Protection Act of 1989 and the OSC Reauthorization Act of 1994. The OSC reauthorization law added to the time it could take the agency to process certain prohibited personnel practices cases. Before the law was enacted, if OSC decided there were no reasonable grounds for believing such a violation occurred, the agency could immediately issue a final letter notifying a complainant of the termination of the investigation. The reauthorization law, however, requires OSC to send the complainant a status report of its proposed fact findings and legal conclusions supporting this decision and give the complainant 10 days to submit comments before OSC’s decision becomes final. The 1989 whistleblower law and the 1994 reauthorization law require more information in the final letters OSC sends to complainants notifying them of the termination of the prohibited personnel practice investigation. Prior to the 1989 law, OSC was simply required to notify the complainant of the termination “and the reasons therefor”. The 1989 law required additional information in the notification letter, namely a summary of the relevant facts, including the facts that support, and do not support, the complainant’s allegations. In addition, the 1994 OSC reauthorization law required that the notification letter address any comments submitted by the complainant in response to OSC’s proposal to terminate the investigation. While the expanded letters were intended to be more “customer friendly,” they are more time-consuming to write than the shorter letters that the agency sent to complainants before the 1989 law was enacted. According to OSC, in the fall of 1993, the agency implemented new procedures in response to congressional criticism about the breadth of the agency’s investigations and to be more consistent with the “spirit” of the Whistleblower Protection Act of 1989. These new procedures required broader investigations in all cases and created additional internal memoranda on each case. As a result, investigations take longer, resulting in the processing of fewer cases and an increased backlog. Before these new procedures were implemented, OSC officials said, the agency achieved higher productivity by limiting the duration of the investigation and the subsequent explanation of the reasons for closure in cases where the agency found no merit. The changes prompted by these statutes and congressional interest, however, were implemented before the period that we examined. Thus, it is worth noting that for both whistleblower disclosure and prohibited personnel practices cases, if OSC had been able to process as many cases during each of the fiscal years from 1997 through 2003 as it did in its best year for each type of case (1997 for prohibited personnel practices cases, 1999 for whistleblower disclosure cases), the backlog for both would have been significantly lower by the end of fiscal year 2003 than it was. Other than the increase in, and complexity of, whistleblower disclosure cases after September 11, 2001, OSC did not inform us of any significant events during the years for which we obtained data that would have prevented the agency from achieving this goal. According to OSC, during congressional consideration of the Civil Service Reform Act of 1978, senators drafting the legislation envisioned that 15 to 20 full-time staff would be needed to process whistleblower disclosure cases within 15 days. OSC has never assigned more than five staff to whistleblower disclosure cases and typically has assigned only two. Given that OSC’s total full-time staff has never exceeded 106 and given the greater volume of prohibited personnel practices cases, assigning several more staff to whistleblower disclosure cases would require the agency to shift staff from one kind of case to the other. OSC officials said they have not shifted staff from what they consider their primary mission—prohibited personnel practices cases—to whistleblower disclosure cases because this would decrease timeliness for prohibited personnel practices cases.
OSC officials pointed out that no special counsel has believed that OSC could meet the 15-day case processing time limit for whistleblower disclosures and that cases have generally become more complex in recent years. OSC officials told us they were not aware of any proposal by the agency to have the 15-day limit increased. Another reason that OSC officials cited for not meeting the 15-day time limit is that it can be difficult to contact whistleblowers to discuss their allegations and receive relevant documentation from them and from agency officials within this short time frame. In one case that was referred by a congressional committee, for example, it became increasingly difficult to conduct business during the day because the whistleblower preferred to receive calls at night and would use only a commercial fax machine on weekends to send supporting documentation. In this case, delays in submitting the evidence contributed to OSC missing the 15-day limit. More broadly, if OSC has a strategy for deploying any or all of the additional staff it received in 2000 and 2001 on whistleblower disclosure cases and a way of evaluating whether that strategy is working, the agency has not shared it with us. Having such a strategy could be important for ensuring that the most serious whistleblower disclosure cases receive not just the most prompt—but also the most comprehensive—attention. If the OSC 2000 data are any guide to the future, the overwhelming majority of cases (96 percent from fiscal years 1997-2003) will continue to be those that do not meet the “substantial likelihood” standard. If a quick determination can be made in a large number of these cases that the standard is not met, it may be that OSC should deploy more staff to this “weeding out” process. Alternatively, if it is not immediately clear in a significant number of these cases whether the “substantial likelihood” standard can be met, then OSC might consider devoting more staff to reviewing these cases. Although OSC’s description of the case priority system indicates that it can track pending and closed whistleblower disclosure cases and use these data to review workload distribution and case processing efficiency, officials have not indicated that they have been able to use these data to reduce the time spent processing cases. In the last several years, OSC has issued a number of mandated reports and correspondence to the Congress and the Office of Management and Budget that discuss OSC’s case processing difficulties and resulting backlog. But OSC has not set out proposed long-term solutions in these documents. In its Annual Report to Congress, OSC is required to report information on prohibited personnel practices cases that are backlogged. In the annual reports that we reviewed, the agency did so and acknowledged that the backlog was a problem. For example, in its fiscal year 1999 report to Congress, OSC disclosed that a significant backlog of cases was pending at the agency, resulting in delays in resolving complaints. The fiscal year 2002 report noted that reducing the backlog is important because (1) Congress imposed the 240-day limit for prohibited personnel practices cases in response to widespread criticism concerning long delays in the processing of complaints by OSC, and (2) a large backlog can prevent OSC staff from quickly investigating and resolving more critical cases.
But none of the annual reports has discussed the specific actions that OSC expects to take, how these actions would affect its ability to process cases—either within or outside statutory time limits—and reduce the backlog, or the actual effects of actions the agency took in previous years. OSC requested seven additional staff in its fiscal year 2004 budget request to the Congress but did not discuss how the additional staff would be deployed or the extent to which they would help the agency process cases more quickly and reduce the backlog. Nor did OSC provide the Congress with an estimate of the number of staff required to solve these problems or an analysis of other changes necessary to do so. In September 2003, the Chairman of the Senate Finance Committee wrote to OSC expressing concerns about the backlog of whistleblower disclosure cases and requesting information on how OSC planned to address the backlog. In an October 2003 response, the Acting Special Counsel said OSC has long struggled with how best to address the backlog of whistleblower disclosure cases. He cited past efforts, including assigning such cases to attorneys from units that would not normally handle whistleblower disclosure cases, which led to the resolution of some cases but was not as successful as the agency had hoped. He said he had assigned two more investigators to assist with the whistleblower disclosure caseload. But the Acting Special Counsel did not specifically discuss what effect the additional investigators would have on the agency’s ability to meet the 15-day limit or to reduce the number of backlogged whistleblower disclosure cases. Strategic workforce planning—planning that focuses on developing long-term strategies for acquiring, developing, and retaining an organization’s people, aligned with human capital approaches that are clearly linked to achieving programmatic goals—is an essential element of a modern, effective human capital management system. OSC’s fiscal year 2004 annual performance plan includes workforce strategy as one of six strategic goals and discusses strategies that the agency plans to take “to maintain a highly skilled, well-trained, customer-oriented workforce, and to deploy it most effectively to carry out the agency’s mission.” This plan discusses changes in staff at the beginning of fiscal year 2001 due to attrition and the addition of new staff, and the fact that experienced staff have been diverted from their usual duties to mentor and help train new staff. The agency has also identified critical skills and developed data on employee attrition and retirement. However, the agency’s planning to date lacks long-term solutions directly associated with improving case processing and reducing the case backlog. The agency has not identified (1) the critical skills needed to meet current and emerging goals, (2) gaps in those skills, (3) strategies to close the gaps, (4) an action plan to implement the strategies, and (5) a plan to evaluate the results. OSC’s challenge in meeting its case processing time standards continues despite actions taken by the Congress and the agency. The delays in processing whistleblower disclosure cases are pervasive: of the total inventory of cases at the end of fiscal year 2003, 97 percent had not been processed within the 15-day statutory time frame. OSC’s actions, including realigning its staff and developing a case priority system, have not significantly reduced the case backlog.
Presenting a strategy to Congress that demonstrates how additional staffing, organizational changes, or legislative solutions would help reduce the backlog of prohibited personnel practices and whistleblower disclosure cases would provide Congress with information that it needs for oversight and resource allocation. We recommend that the Special Counsel provide Congress with a detailed strategy designed to allow more consistent processing of cases within statutory time limits and a reduction in the backlog of cases for which these limits have already passed. On January 21, 2004, we provided a draft of this report to OSC for review and comment. We met with the Special Counsel and the Associate Special Counsel for Legal Counsel and Policy to discuss the draft report and also received written comments from the agency. OSC generally agreed with the contents of the report, noting that our review “has addressed a critical, long-standing issue of importance not just to this agency, but to individuals who seek its assistance, other government agencies, Congress, and the public.” The agency agreed with our recommendation to provide Congress with a detailed strategy designed to allow more consistent processing of cases within statutory time limits and a reduction in the backlog of cases. Noting that its annual report to Congress for fiscal year 2003 has been substantially completed, OSC stated that it plans to report to Congress on its future strategy as expeditiously as possible this year. We believe this is reasonable. OSC’s written response is included in appendix IV. In addition, OSC provided technical comments and clarifications, which we have incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time we will send copies to OSC and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions about this report, please call me at (202) 512-9490 or Belva Martin, Assistant Director, at (202) 512-4285. Key contributors to this engagement are listed in appendix V. Given the nature of OSC’s enforcement mission, its complaint and litigation files often contain personal or sensitive information, including information from or about complaint filers, and other information made or received by OSC during its investigative and prosecuting activities. OSC’s basic privacy protection policies are derived from the Privacy Act of 1974, 5 U.S.C. § 552a. In addition to the Privacy Act, the Whistleblower Protection Act provides that the identity of any employee, former employee, or applicant for employment who makes a whistleblower disclosure may not be disclosed by OSC without the individual’s consent unless OSC determines that the disclosure of the individual’s identity is necessary because of an imminent danger to public health or safety or an imminent violation of any criminal law. Generally, unless permitted under one of the Privacy Act’s exceptions, OSC cannot disclose any of its records by any means to any person, or to another agency, except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains. Exceptions to the Privacy Act permit certain disclosures without such a request or consent.
For example, OSC may disclose information within the agency when other OSC employees need the information to do their jobs, to federal law enforcement agencies for civil or criminal law enforcement purposes, or under OSC’s routine uses. Under the Privacy Act, OSC is permitted to disclose information from its files when doing so would be in accordance with a routine use of such information. As required by the Privacy Act, OSC has published a descriptive listing of its routine uses in the Federal Register, 66 Fed. Reg. 36611 and 51095 (2001). For example, OSC may provide information to the Equal Employment Opportunity Commission about allegations of discrimination and to the Merit Systems Protection Board when filing a petition for disciplinary action. In the event that OSC believes that disclosure may be appropriate in a circumstance when it has not received a written request or consent from a complainant or whistleblower, and no routine use or other Privacy Act condition of disclosure applies, OSC will seek written authorization from the complainant or whistleblower. Regardless of the permissibility of disclosure under the Privacy Act, OSC is specifically prohibited by law from responding to inquiries concerning the work performance, ability, aptitude, general qualifications, character, loyalty, or suitability for any personnel action of any complainant who filed a prohibited personnel practice allegation, unless (1) the complainant consents in advance, or (2) an agency requires the information in order to make a determination concerning the complainant’s access to highly sensitive national security information (5 U.S.C. § 1212(g)(2)). When a prohibited personnel practice complaint is filed, it is OSC’s policy not to reveal the identity of the complainant even to the involved agency unless OSC has the complainant’s consent. OSC requires each filer to select one of three consent statements that contain varying restrictions on OSC’s disclosure and use of complainant information. OSC has taken a number of steps within the agency to ensure that the rights and privacy of complainants are adequately protected. The agency requires that all staff receive training about ensuring the confidentiality of OSC records and the privacy rights of those individuals bringing cases to it. In addition, the agency makes information about its policy statements and disclosure policies under the Whistleblower Protection Act available to each person alleging reprisal for whistleblowing. For example, in April 1995, OSC issued “Policy Statement Concerning the Disclosure of Information Regarding Prohibited Personnel Practice Complaints.” OSC enhanced this statement in September 1995, when it issued another policy statement on the “Disclosure and Use of Information from OSC Files,” which contains disclosure information relevant to all OSC case types and outlines the Privacy Act provisions under which OSC may disclose information about a case. Moreover, in September 2002, OSC issued two updates to the statements that expounded on the disclosure and use of information from OSC program files. These policy papers afford a better understanding of how OSC uses and discloses the information that OSC acquires or creates while investigating and prosecuting cases. We examined efforts by OSC to manage its caseload.
Our review provided information in the following areas: (1) OSC’s caseload by type and number and changes to the caseload from fiscal year 1997 through 2003, (2) the extent to which cases were processed within time frames set by Congress, (3) actions taken by management to address workload issues, and (4) the agency’s perspective on the adequacy of its resources. In addition, we were asked to provide information on OSC’s data tracking system and its privacy protection policies. To determine the agency’s caseload by type, number, and changes over time, we reviewed OSC’s Annual Reports to Congress for fiscal years 1997 through 2002, the latest available, as well as information in OSC’s budget requests, annual performance plans, and other reports to Congress. We examined the information across the various reports and compared it with agency-generated data to ensure that data for each year were consistently stated. We discussed with agency officials the reasons for the fluctuations that occurred in the caseload over time. For the reportable years under review, we received and reviewed data by case type. In our examination of the data, we identified discrepancies, primarily in the beginning and ending inventory of cases. To resolve these discrepancies, we met with OSC’s Chief Information Officer. He told us that the methodology used for querying the system had limitations. In particular, the data entry operator used ad hoc queries that were not reviewed and verified. To provide us with accurate data, he developed a software program that offered a more reliable and consistent approach to querying OSC’s database. We tested the accuracy and completeness of a sample of cases from OSC’s database. Based on the results of our tests of required elements, we determined that the data were sufficiently reliable for the purposes of our report.
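We did not evaluate the Chief Information Officer's program itself, but the fix he described (replacing unverified ad hoc queries with a single reviewed, parameterized query applied uniformly to every fiscal year) can be sketched as follows. The table and column names here are hypothetical illustrations, not OSC 2000's actual schema.

```python
# Illustrative replacement for ad hoc inventory queries. The schema
# (a "cases" table with received_date and closed_date columns) is hypothetical.
import sqlite3

INVENTORY_SQL = """
SELECT COUNT(*) FROM cases
WHERE received_date <= :as_of
  AND (closed_date IS NULL OR closed_date > :as_of)
"""

def inventory(conn: sqlite3.Connection, as_of: str) -> int:
    """Count cases open as of a given date using one reviewed query."""
    (count,) = conn.execute(INVENTORY_SQL, {"as_of": as_of}).fetchone()
    return count

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (received_date TEXT, closed_date TEXT)")
conn.executemany("INSERT INTO cases VALUES (?, ?)",
                 [("2002-11-01", "2003-05-01"), ("2003-02-15", None)])
print(inventory(conn, "2003-09-30"))  # -> 1 open case at fiscal year end
```

Because the same query defines the inventory for every year, the ending inventory computed for one fiscal year necessarily equals the beginning inventory for the next, which removes the kind of discrepancy we found.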
To determine the statutory time frames set by Congress, we reviewed statutory requirements for processing prohibited personnel practice and whistleblower disclosure cases. To determine the number of cases that were not meeting the prescribed time frames, we obtained data on (1) the total number of cases processed, including subsets of the number of cases processed within and outside of the statutory time frames, (2) the average time spent to process cases, (3) the beginning inventory levels, and (4) the number of cases in backlog. Based on this information, we computed and verified the number and percentage of cases in backlog for each year. Throughout the report, we refer to “processed cases” to denote that OSC has made the determination required within the statutory time limits for prohibited personnel practices cases or whistleblower disclosure cases. We met with agency officials to discuss the delays in processing the cases and to obtain their views on the agency’s ability to meet these time standards. To learn about the actions taken by management to address workload issues, we met with various managers and staff responsible for implementing several agency-wide initiatives. We reviewed documentation describing the progress the agency made toward accomplishing internal reforms, including a major restructuring of organizational units and streamlined case processing procedures. We examined productivity measures resulting from changes to case processing procedures. To obtain the agency’s perspective on the adequacy of its resources, we spoke with agency officials and reviewed documents on the benefits of obtaining additional staff and funding to help eliminate the backlog and process cases in a more timely manner. We discussed the agency’s view on the principal contributors to its inability to eliminate backlogged cases. We then analyzed the information that we obtained to form conclusions about the extent to which the agency had made optimum use of its existing resources. Our findings about the need for an overall strategy on how the agency plans to reduce the backlog are based in part on our extensive work on strategic workforce planning. To learn about OSC’s case tracking system, OSC 2000, we reviewed documentation on the system’s internal security controls and security features for safeguarding complaint information. To determine the policies and procedures in place, and management oversight capabilities to ensure reliability and quality, we met with the Chief Information Officer and the System Administrator. We also received a hands-on demonstration of system requirements for entering data. We assessed the reliability of the data in OSC 2000 by reviewing electronic queries of required data elements, reviewing existing information about the data and the system that produced them, and interviewing agency officials knowledgeable about the data. Based on our assessment of required data elements, including the recently generated caseload data, we determined that the data were sufficiently reliable for the purposes of our report. To determine OSC’s policy on privacy protection for the types of cases that it handles, we met with agency officials and examined agency policy statements and disclosure procedures developed for the privacy and confidentiality of complainants. In fiscal year 2003, OSC had five operating divisions: the Complaints and Disclosure Analysis Division, three Investigation and Prosecution Divisions, and the Legal Counsel and Policy Division. The three Investigation and Prosecution Divisions resulted from the 2001 merger of the former Investigation Division and Prosecution Division. The Complaints and Disclosure Analysis Division includes OSC’s two principal intake units for new cases received by the agency—the Complaints Examining Unit and the Disclosure Unit—and employs a total of 24 staff. The Complaints Examining Unit is the intake point for all prohibited personnel practices and other violations of civil service law, rule, or regulation within OSC’s jurisdiction. The attorneys and personnel management specialists in this unit conduct an initial review of complaints to determine whether they are within OSC’s jurisdiction and whether further investigation is warranted. They refer all matters with a potentially valid claim to the Investigation and Prosecution Divisions. The Disclosure Unit is responsible for reviewing information submitted by whistleblowers and for advising the Special Counsel on the appropriate disposition of the case, including possible referral to the head of the relevant agency for investigation, referral to an agency Inspector General, or closure. Attorneys in this unit also analyze the reports of agency heads in response to the Special Counsel’s referral to determine whether the reports appear reasonable and meet statutory requirements before the Special Counsel transmits them to the President and appropriate congressional oversight committees.
The Hatch Act Unit and the Alternative Dispute Resolution Unit are housed within two of the three Investigation and Prosecution Divisions. The three Investigation and Prosecution Divisions investigate complaints referred to them by the Complaints Examining Unit. Each division reviews pertinent records and interviews complainants and witnesses with knowledge of the matters alleged. Matters not resolved during the investigative phase undergo legal review and analysis to determine whether the matter warrants corrective action, disciplinary action, or both. Attorneys from these units conduct litigation before the Merit Systems Protection Board. The units also represent the Special Counsel when OSC intervenes or otherwise participates in other proceedings before the Merit Systems Protection Board. The Hatch Act Unit, part of one of the Investigation and Prosecution Divisions, is responsible for the administration of Hatch Act restrictions on political activity by federal and certain state and local government employees. The unit issues advisory opinions to requesters seeking information about the application of the Act’s provisions to specific activities. It also receives and reviews complaints alleging Hatch Act violations, referring complaints to an Investigation and Prosecution Division, when warranted, for further investigation and possible prosecution before the Merit Systems Protection Board. In selected cases that have been referred for further investigation, the Alternative Dispute Resolution (ADR) Unit, part of another of the Investigation and Prosecution Divisions, contacts the complainant and the employing agency to invite them to participate in OSC’s voluntary Mediation Program. If both parties agree, OSC conducts a mediation session, led by OSC mediators who have mediation training and experience in federal personnel law. When mediation resolves the complaint, the parties execute a written and binding settlement agreement. If mediation does not resolve the complaint, it is referred for further investigation, as it would have been had the parties not attempted mediation. The Legal Counsel and Policy Division serves as OSC’s office of general counsel; manages the agency’s Freedom of Information Act/Privacy Act and ethics programs; and engages in policy planning, development, and implementation. The division is allotted five positions to carry out these functions. OSC also has two administrative support units: the Human and Administrative Resources Management Branch, composed of six employees, provides personnel, procurement, and other administrative services, and the Information Systems Branch, with seven employees, provides information technology, records management, and mail services. In addition to the person named above, Karin Fangman, Sharon Hogan, Michael Rose, and Greg Wilmoth made key contributions to this report.
The U.S. Office of Special Counsel has not been consistently processing cases within statutory time limits, creating backlogs. Because the backlogs are of concern to the Congress, this report provides information on how many cases were processed within statutory time limits, the actions taken by OSC to address case processing delays and backlog, and the agency’s perspective on the adequacy of its resources, along with our analysis of that perspective. The U.S. Office of Special Counsel (OSC) met the 240-day statutory time limit for processing prohibited personnel practice cases about 77 percent of the time from fiscal year 1997 through 2003 and met the 15-day limit for processing whistleblower disclosure cases about 26 percent of the time. OSC took an average of more than six months to process a whistleblower disclosure case. Over the seven-year period, 34 percent of the prohibited personnel practices cases were backlogged, as were 96 percent of the whistleblower disclosure cases. In an attempt to address workload issues, in 2001 OSC streamlined processes and hired additional staff. OSC data indicate that the merger of the agency’s investigators and attorneys into combined divisions increased the average number of cases processed per individual from June 2001 to June 2002. A case priority processing system for prohibited personnel practices and whistleblower disclosure cases allowed OSC to process more important cases more expeditiously, according to OSC. OSC officials told us that the primary reason the agency has not been more successful in meeting the statutory time limits for its cases, particularly those involving whistleblower disclosure, is lack of an adequate number of staff. Our analysis of OSC data indicates, however, that even with increased staffing, the agency was not able to process a significantly larger number of cases within the time limits. OSC noted that staff turnover and the need to train new staff lowered its productivity. Officials also noted the difficulty in meeting the 15-day limit for processing whistleblower disclosure cases but have not proposed an alternative time limit. In external documents to Congress, OSC has discussed its case processing and backlog difficulties but has not developed a comprehensive strategy for dealing with them. Presenting such a strategy would provide Congress with information that it needs for oversight and resource allocation.
We provided copies of a draft of this report to USDA and the Department of Defense for their review and comment. We met with officials from USDA’s Agricultural Marketing Service, Economic Research Service, and Rural Business Service, including the Associate Deputy Administrator, branch chiefs, and others, to discuss their comments on the draft report. These officials had several concerns about our use of USDA’s published data for the mailbox and announced cooperative price series as surrogates for farm- and cooperative-level prices, and our use of commissary prices as a surrogate for wholesale prices. Specifically, the officials made the following comments: Using the mailbox price as the farm-level price understates the amount that farmers receive for a gallon of fluid milk by different amounts in various markets. This is because the mailbox price is the average price received by all farmers in the market for all uses of milk, and the actual prices received by farmers for milk used in fluid products would vary by market, depending upon the proportion of milk used in that market for products other than fluid milk, the costs and profits from manufacturing milk, and other factors that are unrelated to the fluid milk market. During our meeting, Economic Research Service officials suggested what they believe is a better measure of farm-level prices for fluid milk. However, these officials stated that while their suggested method might provide a better estimate of the farm-level price for fluid milk, it would still not provide a perfect estimate because it is impossible to isolate the price for fluid milk in a market where there is concurrent demand for milk used for other products. We were aware of the limitations of using mailbox prices as a surrogate for farm-level prices for fluid milk and therefore included a discussion of these limitations in appendix I of the draft report. However, we chose to use the mailbox price in our analysis because it is routinely published and is, in the dairy industry, a relatively well-recognized estimate of the average price received by farmers. We continue to believe that it provides an adequate proxy for determining the relative portions of the retail price received by the various entities in the farm-to-retail milk marketing chain and the associated trends in changes to these portions. Our views were generally shared by the dairy industry experts with whom we spoke. However, to provide additional perspectives on fluid milk prices, we calculated the farm-level prices for 24 of the 31 markets for which the necessary data were available using the method suggested by the Economic Research Service officials. We found that by using this methodology, farmers, on average, received 47 percent of the retail price for a gallon of 2-percent milk instead of the 43 percent that we calculated using the mailbox price, and cooperatives received 6 percent of the retail price instead of 10 percent in the 24 markets from January 1996 through February 1998. Appendix VI contains additional data and analysis of fluid milk prices using the Economic Research Service’s methodology.
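The share calculations underlying these figures are simple ratios of the four price series. The sketch below uses hypothetical per-gallon prices (not prices from our analysis) to show how each entity's share of the retail price is derived and why reclassifying part of the cooperative margin as a farm-level receipt, as under the Economic Research Service's suggested method, raises the farm share and lowers the cooperative share by the same amount.

```python
# Illustrative farm-to-retail share calculation for a gallon of 2-percent
# milk. All prices are hypothetical, not figures from our analysis.

retail = 2.60      # retail price per gallon
wholesale = 2.00   # surrogate: commissary price
coop = 1.40        # announced cooperative price
farm = 1.12        # surrogate: mailbox price

shares = {
    "farmer":      farm / retail,                 # farm-level share
    "cooperative": (coop - farm) / retail,        # cooperative margin
    "wholesaler":  (wholesale - coop) / retail,   # processing/wholesale margin
    "retailer":    (retail - wholesale) / retail, # retail margin
}
for entity, share in shares.items():
    print(f"{entity:>11}: {share:.0%}")
# Moving revenue from the cooperative measure to the farm measure changes the
# first two shares one for one; the wholesale and retail shares are unaffected.
```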
The officials acknowledged, however, that it is difficult to calculate the actual share of the retail milk price that is received by cooperatives. We have revised the report to indicate that a portion of the announced cooperative price may be used to pay for some costs, such as hauling, in some markets.

Finally, according to the officials, commissary prices may not be a good surrogate for wholesale prices because (1) some of the commissaries are not in close proximity to the markets we analyzed, (2) the wholesale price data were derived from a single store in each market we analyzed, and (3) the commissary price is based on contracts awarded to the lowest-cost bidder and is the result of a rigid bidding process that may not reflect the wholesale prices paid by retailers in a given market. Commissary prices were the best surrogate we could obtain for wholesale prices because of the proprietary nature of wholesale price data. We generally agree with USDA’s concerns that commissary prices may not be fully representative of wholesale prices because they are derived from a single store in each of the markets and are often based on contracts awarded to the lowest-cost bidder. However, we disagree with USDA’s comment that the commissaries we selected were not in close proximity to the markets we analyzed. As displayed in figure I.1 in appendix I, for almost all of the 31 markets included in our analysis, the commissary that we selected was within the marketing area being analyzed. Moreover, according to industry representatives with whom we shared our analysis, the commissary prices appear to be a valid surrogate for actual wholesale prices.

USDA officials also provided us with technical comments that we have incorporated into the report as appropriate. Department of Defense officials advised us that they agreed with the information included in this report on milk prices obtained from defense commissaries and had no other comments.

To identify the major factors that influence the price of milk as it moves from the farm to the consumer, we obtained and analyzed studies performed by USDA, universities, and private entities, and spoke to state government and industry representatives and officials at USDA. To report on the various relationships among prices at the farm through retail levels, we obtained information on milk prices in 31 selected markets throughout the United States. These markets were selected because (1) data were available for these locations, (2) they provided geographical coverage for the nation, and (3) they represented a cross section of state- and federally regulated markets. For these markets, we obtained data from USDA, state milk control agencies, the Department of Defense, and a private data collection company on the prices received by farmers, cooperatives, wholesalers, and retailers for January 1996 through February 1998. We limited our data collection efforts to the sale of whole, 2-percent, 1-percent, and fat-free fluid milk sold in gallon containers, which represents about 64 percent of the fluid milk sold in the United States, and we limited our data analysis to 2-percent milk sold in gallon containers. Our analysis of 2-percent milk may not reflect pricing patterns and trends for the other kinds of milk. A more detailed description of our scope and methodology is presented in appendix I.
We have attributed all of the data used in our analysis to the source from which they were obtained and have not verified the accuracy of these data because the information from which they were compiled was not available to us. We conducted our review from November 1997 through September 1998 in accordance with generally accepted government auditing standards.

As arranged with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other appropriate congressional committees; interested Members of Congress; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VII.

In August 1997, Senators Jeffords and Leahy requested GAO to examine a number of issues concerning the pricing and marketing of fluid drinking milk (fluid milk) in the United States. This report provides information on (1) the factors that influence the price of fluid milk as it moves from the farm to the consumer, (2) the portion of the average retail price of fluid milk that is received by farmers, cooperatives, wholesalers, and retailers in selected markets nationwide, (3) changes in farm and retail prices and their effect on the farm-to-retail price spread—which is the difference between the retail and farm prices, (4) the way changes in prices at any given level in the milk marketing chain are reflected in changes in prices at the other levels, and (5) different retail pricing relationships that exist in selected markets among the four kinds of milk—whole, reduced-fat (2-percent), low-fat (1-percent), and fat-free (skim).

To identify the major factors that influence the price of milk as it moves from the farm to the consumer, we obtained and analyzed studies performed by researchers at the U.S. Department of Agriculture (USDA), universities, and private entities. These included milk marketing cost studies performed by USDA’s Economic Research Service and researchers at Cornell University, and studies on retail pricing strategies performed by Texas A&M University researchers and various state and private entities. We interviewed officials at USDA, academic researchers, and industry representatives. We were also referred by the International Dairy Foods Association, Food Marketing Institute, and National Milk Producers Federation to some of their members, who spoke to us about the factors that they believe influence the price of milk as it moves from the farm to the consumer.

To obtain information on the portion of the retail price received by farmers, cooperatives, wholesalers, and retailers; changes in farm and retail prices and their effect on farm-to-retail price spreads; and the relationship among price changes at different marketing levels, we obtained information on milk prices in 31 selected markets throughout the United States. These markets were selected because (1) data were available for these locations, (2) they provided geographical coverage for the nation, and (3) they represented a cross section of state- and federally regulated markets.
For these markets, we collected data on the prices received by farmers, cooperatives, wholesalers, and retailers for January 1996 through February 1998. We limited our data collection efforts to the sale of whole, 2-percent, 1-percent, and fat-free fluid milk sold in gallon containers, which represents about 64 percent of the fluid milk sold in the United States.

It is important to recognize that there is no precise method by which to calculate or determine the price received by farmers and cooperatives for fluid milk. It is almost impossible to isolate the value of milk used for one type of product when there are competing demands for milk for use in other products. Any method that is selected will only provide an approximation of the value of milk received by farmers and cooperatives and should not be viewed as a precise measure.

For farm-level prices, we used data from USDA’s Agricultural Marketing Service’s (AMS) mailbox price series. The mailbox price is representative of the average price received by dairy farmers who sell milk to cooperatives, bottlers, and processors that are regulated under federal milk marketing orders. The mailbox price is the weighted average of the prices received by all dairy farmers in the market and therefore is computed as the total net dollars received for milk divided by the total pounds of milk marketed. This price includes returns from all milk uses, such as milk used to manufacture cheese and butter, and not just milk used for fluid purposes; therefore, it does not exactly reflect the amount of money received by farmers for milk used only for fluid purposes. AMS developed the mailbox price series in 1995, at the request of the industry, to more accurately reflect the amount of money farmers receive for their unprocessed milk, and it is currently the best data available on the price of milk at the farm level. However, according to AMS officials, this price tends to understate the prices received by farmers for the portion of their milk that is used in fluid milk products.

AMS collects data on mailbox prices for 25 of the 31 locations that we analyzed. Four locations that we included in our analysis—Bismarck, North Dakota; Richmond, Virginia; and San Diego and Sacramento, California—are not under a federal milk marketing order, and AMS does not collect mailbox price data for them. As a result, for Bismarck and Richmond, we relied on data collected by the National Agricultural Statistics Service (NASS). NASS collects data for these locations on the price paid to dairy farmers who marketed Grade A milk. This price was adjusted by an amount that represents the national average for typical deductions from dairy farmers’ checks, such as hauling costs, cooperative dues, and national dairy board assessments. For San Diego and Sacramento, we used data collected by the California state government. For two other markets—Kansas City, Missouri, and Phoenix, Arizona—although AMS collects mailbox prices, it could not share this information with us because of concerns about data confidentiality. Consequently, for these markets, we used the federal milk marketing order blend price for the Greater Kansas City and the Central Arizona federal milk marketing orders, respectively, adjusted by an amount that represents the national average for typical deductions from dairy farmers’ checks—such as hauling costs, cooperative dues, and national dairy board assessments—to represent mailbox prices.
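The mailbox price computation just described is a simple weighted average. The following minimal sketch, in Python, illustrates it; the shipment figures are hypothetical and are not AMS data.

    # Sketch of the mailbox price computation described above: total net
    # dollars received for milk divided by total pounds of milk marketed.
    # The shipment figures below are hypothetical, not actual AMS data.
    shipments = [
        # (pounds of milk marketed, net dollars received after deductions
        #  such as hauling costs, cooperative dues, and dairy board
        #  assessments)
        (120_000, 18_600.00),
        (80_000, 12_000.00),
        (50_000, 7_900.00),
    ]
    total_pounds = sum(pounds for pounds, _ in shipments)
    total_dollars = sum(dollars for _, dollars in shipments)
    # Farm milk prices are conventionally quoted per hundredweight (100 lbs).
    mailbox_price = total_dollars / total_pounds * 100
    print(f"Mailbox price: ${mailbox_price:.2f} per hundredweight")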
For cooperative-level prices, we used AMS’ announced cooperative prices to represent prices paid by bottlers to cooperatives. Bottlers in federally regulated markets generally purchase milk from cooperatives and pay the federal minimum price for fluid milk plus premiums established by the cooperatives. The announced cooperative price is the Class I milk price announced by the major cooperative in each of the markets. The announced cooperative price does not apply to all Class I sales in these markets and is not necessarily the price actually received for all of the fluid milk sold by the major cooperative. The announced cooperative prices have not been verified as actually having been paid by handlers. For Bismarck, North Dakota; Richmond, Virginia; and San Diego and Sacramento, California, which are state-regulated markets, we used the minimum fluid prices established by the states. Data on premiums paid in excess of these minimums were not available for these markets. For Augusta, Maine, we used the Maine Dealer Cost; for Vermont; New Hampshire; Las Vegas, Nevada; Orlando, Florida; and New York City, New York, we used AMS’ estimated cooperative prices.

For wholesale-level prices, we used the prices paid by the Department of Defense’s Commissary Agency under competitive contracts to wholesalers. We used commissary prices as a surrogate for privately established wholesale prices because (1) defense commissaries sell groceries at cost to active and retired military personnel and (2) wholesale price data are considered proprietary by industry officials and were not available to us. According to Defense Commissary officials, the commissary network of stores ranks seventh in the United States in terms of sales volume for supermarket chains. For each of our 31 markets, we selected a nearby commissary, and the Defense Commissary Agency provided us with the monthly wholesale prices paid by the selected commissaries for gallons of whole, 2-percent, 1-percent, and fat-free milk. We recognize that these locations may not provide an ideal match with the other price data analyzed for a given location; however, these were the best wholesale data that we could obtain. In those locations where commissaries sold more than one brand of milk, we obtained the prevailing price for the brand that had the highest sales volume for the period we reviewed. Figure I.1 shows the locations of the 31 selected markets and corresponding commissaries.

For retail-level prices, we contracted with A.C. Nielsen, a private data collection and analysis company, to obtain average monthly retail prices for gallons of whole, 2-percent, 1-percent, and fat-free milk for supermarkets with yearly sales exceeding $2 million in 25 of the 31 markets included in our analysis. We were able to use these data for 27 of the 31 markets we analyzed because data for A.C. Nielsen’s Boston market also include data for the Rhode Island and New Hampshire markets. For the other four markets—Augusta, Maine; Bismarck, North Dakota; Las Vegas, Nevada; and Vermont—where privately collected data were not available, we used the retail price data obtained by the milk control agencies for these states.
To determine the (1) portion of the retail price that is received by farmers, cooperatives, wholesalers, and retailers; (2) changes in retail and farm prices and their effect on the farm-to-retail price spread; and (3) the way in which changes in prices at any given level in the milk marketing chain are reflected in changes in prices at the other levels, we limited our analysis to 2-percent milk, which currently represents the largest volume of reduced-fat milk sold nationally. Our analysis of 2-percent prices may not necessarily reflect pricing patterns and trends for the other three kinds of milk. Appendix III includes graphs that show the relationships among the farm, cooperative, wholesale, and retail prices for a gallon of 2-percent milk for each of the 31 markets.

Because the farm-level and cooperative-level prices reflect a higher butterfat content than is present in 2-percent milk, we adjusted these prices to reflect the value of removing butterfat and replacing it with skim milk. This adjustment allowed us to use farm- and cooperative-level prices that were comparable to the wholesale- and retail-level prices for our analysis.

To determine how much farm and retail prices had changed and the effect of these changes on the farm-to-retail price spread from January 1996 through February 1998 for each of the 31 markets, we used a statistical procedure to estimate mailbox and retail prices at the beginning and end of the period. We used estimated rather than actual prices to reduce the influence of the starting and ending dates selected for our analysis in markets in which there is considerable month-to-month variability in milk prices. The differences between the estimated initial and final prices represent the trend changes during the period. In some cases, this difference may be zero because there is no apparent trend. We calculated the change in the farm-to-retail price spread as the estimated retail price difference minus the estimated mailbox price difference.

To describe how changes in prices at any given level in the milk marketing chain were reflected in changes in prices at the other levels in each of the 31 markets included in our analysis, we tested for correlations between price changes at the various levels. Specifically, we calculated coefficients describing the degree of correlation between changes in mailbox prices and changes in prices at the cooperative, wholesale, and retail levels; changes in cooperative-level prices and changes in prices at the wholesale and retail levels; and changes in prices at the wholesale and retail levels. In appendix III, we report those correlation coefficients that are statistically different from zero at the 95-percent confidence level. Although we only report correlations between price changes that occur in the same month, we also calculated correlations under the assumption of lagged effects (such as the correlation between changes in wholesale prices in the current month and changes in cooperative-level prices in the previous month). In general, we found fewer markets with statistically significant correlations when we took lagged effects into account, although when effects were assumed to be lagged, the correlations were greater in some markets.

To provide information on the relationships among retail prices for all four kinds of milk, we analyzed the retail price data that we obtained from A.C. Nielsen and the state milk control agencies.
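The trend and correlation procedures described above can be sketched as follows. The report does not name the exact statistical procedure used, so this minimal sketch assumes an ordinary least-squares trend line and Pearson correlations on month-to-month price changes; the price series are simulated, not actual market data.

    # Minimal sketch of the trend and correlation analysis described above.
    # Assumptions (not stated in the report): the trend is an ordinary
    # least-squares line, a trend is treated as absent when its p-value is
    # .05 or higher, and correlations are Pearson coefficients on
    # month-to-month price changes. The price series are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    months = np.arange(26)  # January 1996 through February 1998
    mailbox = 1.30 - 0.005 * months + rng.normal(0, 0.02, 26)  # farm level
    retail = 2.60 + 0.004 * months + rng.normal(0, 0.03, 26)   # retail level

    def estimated_trend_change(prices):
        """Estimated final price minus estimated initial price, taken from
        a fitted trend line; zero when there is no apparent trend."""
        slope, intercept, r, p, stderr = stats.linregress(months, prices)
        return 0.0 if p >= 0.05 else slope * (months[-1] - months[0])

    spread_change = estimated_trend_change(retail) - estimated_trend_change(mailbox)
    print(f"Change in farm-to-retail price spread: {spread_change:+.2f} per gallon")

    # Same-month correlation of price changes; a lagged correlation would
    # shift one differenced series by a month before correlating.
    r, p = stats.pearsonr(np.diff(mailbox), np.diff(retail))
    if p < 0.05:  # report only coefficients significant at the 95-percent level
        print(f"Mailbox-retail price change correlation: r = {r:.2f}")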
The retail price data for all four kinds of milk are arrayed in appendix IV for each of the selected 31 markets for January 1996 through February 1998.

When we met with USDA officials to obtain their comments on a draft of this report, officials from the Economic Research Service (ERS) suggested an alternative method that they believe better estimates the farm-level price of a gallon of unprocessed milk used for fluid purposes than does the mailbox price. They believe that the farm value of fluid milk is better defined as the minimum order Class I price minus the difference between the minimum order blend price and the mailbox price. However, this methodology for calculating an alternative farm price assumes that all classes of milk are equally profitable to the cooperative at the prevailing prices. We used the ERS method to calculate farm-level prices for the 24 markets in our sample of 31 markets that are under the federal milk marketing order program. Appendix VI provides our analysis using this alternative farm-level price for fluid milk for these 24 markets.

About 6 billion gallons of the approximately 18 billion gallons of milk produced by U.S. dairy farmers each year are processed into fluid milk that has a retail value of about $19 billion. Dairy farmers receive a price for unprocessed milk, and each entity that is involved in the processing and marketing of fluid milk adds value to the product and receives a portion of the difference between the farm and retail price. (This difference is known as the price spread.) This appendix describes how unprocessed milk prices are determined at the farm level and the factors that influence the price of milk as it moves from the farm to the retail level. This appendix also provides information on the costs associated with marketing milk, as estimated by researchers at Cornell University, and industry officials’ views regarding these estimates. We could not obtain specific cost data for our analysis because wholesale and retail cost data in the private sector are proprietary.

Prices at the farm level for unprocessed milk sold for use in fluid milk products are determined by supply and demand forces that are influenced by federal and state dairy programs. These programs ensure that dairy farmers receive at least a specified minimum price for their unprocessed milk. The primary federal programs include the milk marketing order and dairy price support programs. The federal and state programs were established to provide a safety net for individual farmers who lack market power compared with other entities, such as wholesalers and retailers. About 75 percent of the milk produced in the United States is regulated under the federal milk marketing order program, which was established in 1937. Some areas, such as California, are not under the federal program and are covered by state programs. In these areas, dairy farmers are paid minimum milk prices that are established by the state government. The milk marketing order program sets minimum prices for unprocessed milk; these minimum prices are based on the way the milk is to be used. Minimum prices paid to farmers for unprocessed fluid grade milk are based on the prices paid to the farmers who produce manufacturing-grade milk in Minnesota and Wisconsin. Manufacturing-grade milk can only be used for making products such as butter and cheese. This milk is not regulated and therefore its market price is based on supply and demand conditions.
For milk used for fluid products, a differential is added to the minimum price for milk used for manufacturing purposes. This differential varies throughout the United States and generally increases as the distance from Eau Claire, Wisconsin, increases. The extent to which these differentials will affect the average price of milk the farmer receives depends on the extent to which the total supply of milk in a specific market area is used for fluid or manufacturing purposes. Buyers of milk regulated by federal and state orders are permitted to pay farmers prices in excess of the established minimums. Any such excess payments are determined by market forces.

In addition to the federal and state milk marketing order programs, the federal dairy price support program, created in 1949, supports farm-level prices by providing a standing offer to purchase butter, cheese, and nonfat dry milk at specified prices. The prices offered for these dairy products are intended to provide sufficient revenue so that dairy product manufacturers can pay farmers the legislatively mandated support price. This program has the effect of providing a floor for the price of milk used for manufacturing purposes. As a result, it influences the price that farmers receive for milk used for fluid purposes under the milk marketing order program because the support price sets a floor below which manufacturing-grade milk prices are unlikely to fall for very long. The price support program offers a safety net to all dairy farmers, including those not participating in federal or state milk marketing orders.

The price that cooperatives charge wholesalers for fluid milk is influenced by the minimum price established by federal and state milk marketing order programs, the services that the cooperatives provide, and the relative market power of cooperatives and processors. About 86 percent of all milk produced by farmers is marketed through dairy cooperatives that are owned and financed by farmer-members. Cooperatives can either (1) process, package, and distribute the resulting fluid milk to retail outlets for sales to consumers or (2) sell the unprocessed milk to wholesalers who process, package, and distribute fluid milk for sale to retail outlets. Cooperatives operate like corporate businesses to perform services for their members. Some distinctive features of cooperatives include member-user ownership and control, services at cost to their members, and distribution of income to their members on the basis of their patronage.

Cooperatives generally sell unprocessed milk to wholesalers at prices that are higher than the federal or state minimums for fluid milk. In federally or state-regulated markets, any differences between the prices charged to the wholesalers and the milk marketing order minimums are known as over-order premiums. These premiums are primarily determined by the marketplace rather than the federal milk marketing order program. Over-order premiums compensate cooperatives for the services they provide, such as transporting milk from different milk-producing areas, and also reflect the market power acquired by cooperatives relative to processors.

The price that wholesalers charge retailers for fluid milk is influenced by the costs they incur for processing, packaging, and distributing fluid milk to retail outlets and by the wholesalers’ interest in earning a return on investment. Wholesalers include bottlers and major retail food chains with bottling plants.
Processing services provided by wholesalers include pasteurization, homogenization, and the standardization of butterfat and nonfat solids in flavored milks, buttermilk, and whole, 2-percent, 1-percent, and fat-free milk. Wholesalers package these products into a variety of types and sizes of containers and arrange for their distribution to retail outlets for sale to consumers. In addition to shipping the products to retailers, some wholesalers provide different levels of in-store service, according to industry officials. For example, some wholesalers provide a full range of services to retailers—unloading the milk at the store dock, restocking the dairy case, and removing outdated and/or leaking containers. In contrast, other wholesalers may not provide any services to retailers beyond their shipping docks. Differences in wholesale-level prices reflect differences in any or all of these factors.

Furthermore, according to dairy industry officials we spoke to, in some highly regulated markets, state regulations may increase both wholesale and retail prices. For example, an official from one of the nation’s largest grocery wholesalers told us that in North Dakota the distribution of milk is restricted to wholesalers approved by the state. Consequently, milk can only be delivered to retail stores on trucks owned by an approved wholesaler. This requirement prohibits retailers and nonapproved wholesalers from using their own trucks to deliver milk to retail stores, which in some cases may be a less costly delivery method.

Typically, wholesale cost and pricing data in the private sector are not available to the public because such data are considered proprietary. In addition, any sharing of cost or pricing data with competitors or others could be considered a violation of state and/or federal antitrust laws. In the absence of specific cost data, we obtained cost estimates from researchers at Cornell University that are based on information obtained from a survey of wholesalers. The researchers surveyed 35 well-managed plants, operated by 23 companies. The 35 sampled plants included 22 proprietary plants, 5 cooperative plants, and 8 supermarket-owned plants. The plants processed an average of 28 million pounds of milk per month, ranging from slightly more than 13 million pounds to more than 51 million pounds. Distribution cost estimates were based on large accounts served by the plants, including supermarkets, large convenience stores, and club stores.

On the basis of the survey data they received from the wholesalers, the Cornell researchers estimated the cost of marketing fluid milk products. For example, the researchers estimated that in 1995, it cost about $2.12 to market a gallon of 2-percent milk through a large supermarket in the New York metropolitan area. (This amount also included an estimated 19 cents per gallon for handling costs incurred by the retailer and the retailer’s return on investment.) Figure II.1 shows the various costs associated with marketing a gallon of 2-percent milk through supermarkets in the New York metropolitan area in 1995. According to officials of dairy cooperatives, wholesalers, and retailers of fluid milk, the estimates developed by the researchers at Cornell University were generally representative of the cost of performing fluid milk processing and marketing functions.
However, they told us that the costs for distributing milk in other markets could be significantly higher than the 10 cents per gallon estimated in the study for the New York metropolitan area. In particular, distribution costs in rural markets could range as high as 25 to 40 cents a gallon. According to one industry official, delivery costs to rural markets are higher than those to some urban markets because wholesalers have to deliver smaller quantities of fluid milk products to more isolated, rural stores. These additional costs are often reflected in higher wholesale and retail prices for fluid milk in these areas.

Prices that consumers pay retailers for fluid milk are influenced not only by supply and demand considerations that determine the overall retail-level market price for fluid milk but also by specific considerations that affect prices at individual retail outlets. With respect to determining the overall price charged at the retail level, the quantity of fluid milk supplied by retailers is influenced by the prices that retailers have to pay wholesalers to acquire the product; retailers’ operating costs, such as labor, rent, and utilities; and their required return on investment. On the other hand, the amount of fluid milk that consumers want to purchase at the retail level is influenced not only by the price of fluid milk but also by such factors as the size, age, and income levels of the population in the marketing area and the prices of substitutes.

Studies performed by economists and others over the years have shown that the demand for milk at the retail level is relatively insensitive to changes in the price for fluid milk because of the lack of close substitutes. Generally, these studies have concluded that a 10-percent increase or decrease in the price of fluid milk will result in less than a 10-percent decrease or increase in the quantity that consumers will purchase. However, several industry officials told us that they believe that the demand for milk in recent years has become more sensitive to price changes than previously thought.

For individual retail outlets, additional considerations may influence the manner in which retail prices for milk are set. To meet their stores’ goals, such as profit maximization and increased market share, retailers may use a number of strategies for pricing fluid milk. In developing these pricing strategies, retailers consider their retailing costs, the prices charged by their competitors, the role that milk prices play in attracting customers to their stores, the convenience offered by their store compared with other stores, and their desire to build an image of quality or low prices for their stores. Those retail pricing strategies that are primarily based on a retailer’s operating costs are generally referred to as vertical pricing strategies, whereas those strategies that are based on responding to prices charged by competitors are referred to as horizontal pricing strategies. Retailers may use a combination of horizontal and vertical pricing strategies when setting prices for fluid milk.

A vertical pricing strategy is a method by which retailers price fluid milk products to cover the costs associated with operating their stores. The retail price is set in a manner that allows them to recoup (1) the price paid to fluid milk wholesalers; (2) operating costs such as rent, labor, interest expense, and general overhead; and (3) a return on investment.
Some retailers use a variable markup strategy that allows them to charge different markups on various products sold in their stores while seeking an overall profit margin target for the store. For example, a retailer who wishes to promote a low-cost image for the store may sell gallons of 2-percent milk at or near cost while raising the price of other items in the store. On the other hand, retailers wishing to increase the profitability of the dairy section might maintain relatively high prices for fluid milk but set lower prices for other items in the store. According to state and dairy industry officials, the latter situation has existed during the last couple of years in the Minneapolis market. Retailers’ pricing strategy choices will depend on their views about the importance of milk prices versus the prices of other products sold in their stores in influencing consumers’ overall perceptions about their stores.

A horizontal pricing strategy is a method by which retailers set fluid milk prices in response to the prices being charged by competitors in their area. Retailers pursuing such strategies are very sensitive to price levels at neighboring retail outlets and will adjust their prices accordingly to create an image of lower or more competitive prices. Also, retailers pursuing horizontal pricing strategies may be less sensitive to market signals on the wholesale price of milk. Instead, retailers may continue to charge a given price for milk even though wholesalers have either increased or decreased prices.

Furthermore, retail prices may be influenced by state regulations and customers’ desire for convenience or high quality. For example, dairy industry officials contend that state regulations that prohibit wholesalers and retailers from selling milk below cost result in higher retail milk prices. Such regulations prevent retailers from using lower-priced milk as a means of attracting more customers to their stores. In addition, according to industry officials, retail prices for fluid milk may be influenced by the fact that some consumers are willing to pay a higher price for convenience and quality. For example, convenience stores sell only a limited number of items, which allows consumers to quickly purchase fluid milk and spend the least amount of time necessary in the store. As a result, these stores can charge a higher price for fluid milk than supermarkets charge, primarily because of the convenience that they provide. Similarly, retail stores that emphasize high-quality products may stock widely recognized brand-label fluid milk products for which their customers are willing to pay a higher price because they associate the brand label with better quality. Industry officials told us that some retailers believe that a stable retail price for milk may also help create an impression of high quality.

This appendix summarizes our analysis of farm-to-retail prices for a gallon of 2-percent milk in 31 selected markets nationwide for January 1996 through February 1998. Our analysis includes information on (1) the portion of the retail price that is received by farmers, cooperatives, wholesalers, and retailers; (2) changes to retail and farm-level prices and their effect on the farm-to-retail price spread; and (3) the way in which changes in prices at any given level in the milk marketing chain are reflected in changes in prices at the other levels.
We limited our analysis to gallons of 2-percent milk because in recent years sales of reduced-fat milk products have increased and account for about 63 percent of all retail sales of fluid milk. Of this amount, sales of 2-percent milk account for about 55 percent of the total sales of reduced-fat milk. The farm and cooperative prices included in our analysis and presented in this appendix are adjusted for 2-percent butterfat. Our analysis of 2-percent milk prices may not reflect pricing patterns and trends for the other three kinds of milk. Complete data on prices for all four kinds of milk—whole, 1-percent, fat-free as well as 2-percent—are presented in appendix V.

From January 1996 through February 1998, for the 31 fluid milk markets that we reviewed, farmers, on average, received 42 percent of the retail price for a gallon of 2-percent milk, cooperatives received 10 percent, wholesalers received 31 percent, and retailers received 17 percent. However, the portion received at any one level in the marketing chain varied substantially among markets. For example, the portion of the average retail price that farmers received ranged from about 31 to 54 percent and the portion that retailers received ranged from about 4 to 31 percent in the markets we reviewed, with one exception. In commenting on a draft of this report, USDA officials stated that because the farm-level and cooperative-level prices used in our analysis are not a measure of the prices that farmers and cooperatives actually receive for milk used for fluid products, our results understate the amount received by farmers and overstate the cooperatives’ share of the retail price, especially in some markets. Moreover, a portion of the amount received by the cooperatives may be used by the cooperative to pay for some costs, such as hauling, in some markets. Table III.1 provides these data for each of the selected markets.

[Table III.1: Portion of the retail price received by farmers, cooperatives, wholesalers, and retailers in each of the 31 selected markets. The market-level data are omitted here.]

From January 1996 through February 1998, retail fluid milk prices remained constant or increased in 27 markets and decreased in 4 markets. In contrast, farm prices decreased in 27 markets and remained constant in 4 markets. As a result of these price changes, the farm-to-retail price spread increased in 27 of the 31 markets over the 26-month period. Table III.2 provides these data for each of the selected markets.

[Table III.2: Changes in farm and retail prices and in the farm-to-retail price spread for each of the 31 selected markets. The market-level data are omitted here. Table note: indicates that no statistically significant change was observed in the price over the 26-month period and therefore represents a constant price for that market.]
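The shares reported in table III.1 above follow from a simple decomposition of the retail price. The sketch below assumes, consistent with the price-spread discussion in appendix II, that each entity's portion is the increment it adds over the preceding level in the marketing chain, divided by the retail price; the per-gallon prices are hypothetical.

    # Sketch of the share decomposition implied above: each entity's portion
    # of the retail price is the increment it adds over the preceding level
    # in the marketing chain. The per-gallon prices below are hypothetical.
    farm, cooperative, wholesale, retail = 1.10, 1.36, 2.17, 2.61

    shares = {
        "farmers": farm / retail,
        "cooperatives": (cooperative - farm) / retail,
        "wholesalers": (wholesale - cooperative) / retail,
        "retailers": (retail - wholesale) / retail,
    }
    for entity, share in shares.items():
        print(f"{entity:>12}: {share:5.1%}")  # the four shares sum to 100%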
The strongest correlation between changes in prices occurs between any level and its adjacent level in the marketing chain. For example, in most of the markets we analyzed, there was a strong correlation between changes in farm prices and changes in cooperative prices. Similarly, changes in wholesale prices were generally reflected in changes in retail prices. In contrast, changes in the prices received by farmers correlated less frequently with changes in retail prices than with changes in cooperative or wholesale prices. This is because, as discussed in appendix II, many factors other than farm or wholesale prices influence the price of fluid milk at the retail level. Tables III.3 through III.5 present data on our correlation analysis for price changes at the four marketing levels. The values of the correlation coefficients presented represent estimates of the degree to which price changes at one level in the milk marketing chain are associated with price changes at other levels.

[Tables III.3 through III.5: Correlation coefficients for price changes between marketing levels in each of the 31 selected markets. The market-level coefficients are omitted here. Table note: an asterisk indicates that the correlation coefficient is statistically significant (i.e., p < .05); p-values are not included in the tables.]

Figures III.1 through III.31 present data for January 1996 through February 1998 on farm, cooperative, wholesale, and retail prices for gallons of 2-percent milk in each of the 31 markets for which we obtained data. Gaps in any of the lines shown in the graphs are the result of unavailable data. [Figures III.1 through III.31 are not reproduced here.]

This appendix provides information on the average retail price for whole, 2-percent, 1-percent, and fat-free milk in 31 selected markets for January 1996 through February 1998. We found that retail pricing relationships among the four kinds of milk varied significantly in the markets we analyzed.
For example, in the Seattle market from January 1996 through February 1998, the average price for 2-percent milk was consistently lower than the average price for whole, 1-percent, or fat-free milk. On the other hand, for this period in the Minneapolis market, the average retail price for fat-free milk was consistently lower than the price of whole, 2-percent, or 1-percent milk. In other markets, such as New York, the lowest-priced milk shifted among 2-percent, 1-percent, and fat-free milk over the same period. Figures IV.1 through IV.31 provide information on the average retail price for each of the four kinds of milk for the 31 markets for January 1996 through February 1998. [Figures IV.1 through IV.31 are not reproduced here.]

This appendix provides data for January 1996 through February 1998 on the average monthly retail-, wholesale-, cooperative-, and farm-level prices for a gallon of whole, 2-percent, 1-percent, and fat-free milk, for 31 selected markets. These data are presented in table V.1 through table V.31.

[Tables V.1 through V.21, presenting monthly market-level prices, are not reproduced here. The recoverable table titles and notes are:
Table V.1: Atlanta, Ga., Market Milk Prices Per Gallon—Retail and Wholesale Prices, and Announced Cooperative Class I and Mailbox Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table V.9: Houston, Tex., Market Milk Prices Per Gallon—Retail and Wholesale Prices, and Announced Cooperative Class I and Mailbox Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table V.16: New Hampshire Market Milk Prices Per Gallon—Retail and Wholesale Prices, and Announced Cooperative Class I and Mailbox Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table notes: “Data not available.” and “Product not carried by the commissary surveyed.”]
[Tables V.22 through V.31, presenting monthly market-level prices, are not reproduced here. The recoverable table titles and notes are:
Table V.22: Phoenix, Ariz., Market Milk Prices Per Gallon—Retail and Wholesale Prices, and Announced Cooperative Class I and Blend Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table V.24: Rhode Island Market Milk Prices Per Gallon—Retail and Wholesale Prices, and Announced Cooperative Class I and Mailbox Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table V.26: Sacramento, Calif., Market Milk Prices Per Gallon—Retail and Wholesale Prices, and California Class I and Mailbox Prices (Both Unadjusted for Butterfat Content), 1996-February 1998.
Table note: “Data not available.”]

This appendix provides the results of our analysis using an alternative method for calculating farm-level prices for fluid milk. When we met with USDA officials to obtain their comments on a draft of this report, officials from the Economic Research Service (ERS) suggested an alternative method by which we could better estimate the farm-level price of a gallon of fluid milk compared with the mailbox price. These officials believe that the farm value of fluid milk is better defined as the minimum order Class I price minus the difference between the minimum order blend price and the mailbox price. This methodology for an alternative farm-level price assumes that all classes of milk are equally profitable to the cooperative at the prevailing prices. ERS officials acknowledged that this is not the case in some markets. Nevertheless, they believe that their recommended approach may be the only practical method by which to determine the farm-level value of fluid milk within the reporting time frames.

Because the ERS method relies on the federal minimum order Class I price, we could only apply it to those markets in our sample of 31 selected markets that are under the federal milk marketing order program. Consequently, this appendix provides data for only 24 of the 31 markets that we included in our analysis in appendixes III and IV. In comparing the results of the alternative farm-level price analysis to our analysis using the mailbox price series, we found the following: Using the alternative farm-level price, farmers received a greater portion and cooperatives received a smaller portion of the retail price compared with what they received under our analysis using the mailbox price series. We found that by using the alternative farm-level price, farmers, on average, received 47 percent of the retail price for a gallon of 2-percent milk instead of 43 percent, and cooperatives received 6 percent of the retail price instead of 10 percent in the 24 markets from January 1996 through February 1998.
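The ERS calculation described above reduces to a one-line formula. The sketch below applies it to hypothetical per-gallon values; actual order prices vary by market and are not reproduced in this appendix.

    # Sketch of the ERS alternative farm-level price described above:
    #   alternative farm price = minimum order Class I price
    #                            - (minimum order blend price - mailbox price)
    # The per-gallon values below are hypothetical, not actual order prices.
    def alternative_farm_price(class1_minimum, blend_minimum, mailbox):
        """ERS-suggested estimate of the farm-level value of fluid milk.
        Assumes all classes of milk are equally profitable to the
        cooperative at prevailing prices, as the report notes."""
        return class1_minimum - (blend_minimum - mailbox)

    price = alternative_farm_price(class1_minimum=1.38,
                                   blend_minimum=1.20,
                                   mailbox=1.14)
    print(f"Alternative farm price: ${price:.2f} per gallon")  # prints $1.32

Because the Class I minimum exceeds the blend minimum in fluid markets, this alternative price exceeds the mailbox price, which is consistent with the larger farm share (47 percent rather than 43 percent) reported above.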
Consistent with the results of our analysis using the mailbox price series, we found that farm prices decreased or stayed constant in all of the 24 markets over the 26-month period that we analyzed. Consequently, over this period the farm-to-retail price spread increased in 22 of the 24 markets. However, as would be expected, we found that some of the values for increases or decreases in farm prices and the farm-to-retail price spread over the 26-month period changed when we used the alternative farm price. For example, the farm-level prices in several markets in the North Atlantic Region, such as Vermont and Boston, Mass., remained constant when we used the alternative farm-level price instead of decreasing as they did when we used the mailbox price series. As a result, when we used the alternative farm-level price, the farm-to-retail price spread in these markets was not as large as when we used the mailbox price.

Also, as in our mailbox price analysis, we found a strong correlation between changes in prices at the farm level and changes in cooperative prices. (Using the alternative farm-level price, we found a strong correlation between changes in farm prices and changes in cooperative prices in all 24 markets, whereas when we used mailbox prices we found a strong correlation in 21 of the 24 markets.) However, the degree of correlation was greater in almost all of the 24 markets when we used the alternative farm-level price. Moreover, as would be expected, when we used the alternative farm-level price, the strongest correlation still occurred between changes in prices at a given level in the milk marketing chain and changes in prices at the adjacent level.

Tables VI.1 through VI.4 present data for a gallon of 2-percent milk for January 1996 through February 1998, using the ERS method to calculate the alternative farm price. Specifically, these tables detail (1) the farm value of a gallon of 2-percent milk (see table VI.1), (2) the portion of the retail price of a gallon of 2-percent milk that is received by farmers and cooperatives (see table VI.2), (3) changes in farm and retail prices and their effect on the price spread for 2-percent milk (see table VI.3), and (4) the way changes in the alternative farm price are reflected in changes in prices at the cooperative, wholesale, and retail levels (see table VI.4).

[Tables VI.1 through VI.4 are not reproduced here. The recoverable title is Table VI.1: Comparison of the Mailbox Price With the Alternative Farm Price for a Gallon of Milk Adjusted to Two Percent Butterfat Content for Twenty-Four Markets, 1996-February 1998.]
[Table VI.3 note: indicates that no statistically significant change was observed in the price over the 26-month period and therefore represents a constant price for that market.]

[Table VI.4 note: an asterisk indicates that the correlation coefficient is statistically significant (i.e., p < .05); p-values are not included in the table.]

Major contributors to this report:
Anu K. Mittal, Assistant Director
Dale A. Wolden, Evaluator-in-Charge
James L. Dishmon, Jr.
Curtis L. Groves
Jay L. Scott
Sheldon H. Wood, Jr.
Pursuant to a congressional request, GAO reviewed issues concerning the pricing and marketing of fluid milk, focusing on: (1) the factors that influence the price of fluid milk as it moves from the farm to the consumer; (2) the portion of the average retail price of a gallon of fluid milk that is received by farmers, cooperatives, wholesalers, and retailers in selected markets across the country; (3) changes in farm and retail prices and their effect on the farm-to-retail price spread; (4) the way changes in prices at any given level in the milk marketing chain are reflected in changes in prices at the other levels; and (5) different retail pricing relationships that exist in selected markets among the four kinds of milk. GAO noted that: (1) at all levels in the fluid milk marketing chain, prices are determined by the interaction of numerous supply and demand factors; (2) the supply available at any given level is influenced by: (a) the costs incurred by the entities involved in the production, processing, and marketing of fluid milk; (b) the government policies that establish minimum prices for unprocessed milk used to produce fluid milk; (c) competitive conditions in the marketplace; (d) the market power acquired by the entities involved; and (e) the price of milk; (3) similarly, the amount demanded at any given level is influenced by the size, age, and income levels of the population in the marketing area, and the prices of fluid milk and substitute goods; (4) furthermore, the prices that retailers receive for fluid milk are influenced not only by their operating costs and return on investment but also by other factors, such as the pricing strategies used by competitors; (5) from January 1996 through February 1998, for the 31 fluid milk markets that GAO reviewed, on average, farmers received 42 percent of the retail price for a gallon of 2-percent milk, cooperatives received 10 percent, wholesalers received 31 percent, and retailers received 17 percent; (6) however, the portion received at any one level in the marketing chain varied substantially among markets; (7) from January 1996 through February 1998, retail prices for a gallon of 2-percent milk remained constant or increased in 27 markets and decreased in 4 markets; (8) in contrast, farm prices decreased in 27 markets and remained constant in 4 markets; (9) as a result of these price changes, the farm-to-retail price spread increased in 27 of the 31 markets over the 26-month period GAO reviewed; (10) changes in prices at any given level in the milk marketing chain were most often reflected in changes in prices at the next level, as might be expected; (11) similarly, changes in wholesale prices generally correlated with changes in retail prices; (12) in contrast, changes in prices received by farmers less frequently correlated with changes in retail prices than they did with changes in cooperative or wholesale prices; and (13) retail pricing relationships among the four kinds of milk varied significantly in the markets GAO analyzed.
Since the September 11, 2001, terrorist attacks, there has been concern that certain radioactive material could be used in the construction of a radiological dispersion device (RDD). An RDD disperses radioactive material over a particular target area, which could be accomplished using explosives or by other means. The major purpose of an RDD would be to create terror and disruption, not death or destruction. Depending on the type, form, amount, and concentration of radioactive material used, direct radiation exposure from an RDD could cause health effects to individuals in proximity to the material for an extended time; for those exposed for shorter periods and at lower levels, it could potentially increase the long-term risks of cancer. In addition, the evacuation and cleanup of contaminated areas after dispersal could lead to panic and impose serious economic costs on the affected population. In 2003, a joint NRC/Department of Energy (DOE) interagency working group identified several radioactive materials (including Americium-241 and Cesium-137) as materials at higher risk of being used in an RDD, describing these as “materials of greatest concern.”

In its risk-based approach to securing radioactive sources, NRC has made a commitment to work toward implementing the provisions of IAEA’s Code of Conduct. This document provides a framework that categorizes the relative risk associated with radioactive sources. While NRC has recently focused on upgrading its capacity to track, monitor, and secure category 1 and 2 sources, which are considered high risk, category 3 sources are not a primary focus of NRC regulatory efforts. Category 3 sources include byproduct material, which is radioactive material generated by a nuclear reactor, and can be found in equipment that has medical, academic, and industrial applications. For example, a standard type of moisture gauge used by many construction companies contains small amounts of Americium-241 and Cesium-137. According to NRC, it would take 16 curies of Americium-241 to constitute a high-risk category 2 quantity, and 1.6 curies of Americium-241 is considered a category 3 quantity.

In October and November 2006, using fictitious names, our investigators created two bogus companies—one in an agreement state and one in a non-agreement state. After the bogus businesses were incorporated, our investigators prepared and submitted applications for a byproduct materials license to both NRC and the department of the environment for the selected agreement state. The applications, mailed in February 2007, were identical except for minor differences resulting from variations in the application forms. Using fictitious identities, one investigator represented himself as the company president in the applications, and another investigator represented himself as the radiation safety officer. The license applications stated that our company intended to purchase machines with sealed radioactive sources.

According to NRC guidance finalized in November 2006 and sent to agreement states in December 2006, both NRC and agreement state license examiners should consider 12 screening criteria to verify that radioactive materials will be used as intended by a new applicant. For example, one criterion suggests that the license examiner perform an Internet search using common search engines to confirm that an applicant company appears to be a legitimate business that would require a specific license.
Another screening technique calls for the license examiner to contact a state agency to confirm that the applicant has been registered as a legitimate business entity in that state. If the examiner believes there is no reason to be suspicious, he or she is not required to take the steps suggested in the screening criteria and may indicate “no” or “not applicable” for each criterion. If the license examiner takes additional steps to evaluate a criterion, he or she should indicate what publicly available information was considered. If there is concern for a potential security risk, the guidance instructs license examiners to note the basis for that concern. Nine days after mailing their application form to NRC, our investigators received a call from an NRC license examiner. The NRC license examiner stated that the application was deficient in some areas and explained the necessary corrections. For example, the license examiner asked our investigators to certify that the machines containing sealed radioactive source material, which are typically used at construction sites, would be returned to the company office before being transported to a new construction site. The license examiner explained that this was a standard security precaution. Even though we did not have a company office or a construction site, our investigators nevertheless certified their intent to bring the machines back to their office before sending them to a new location. They made this certification via a letter faxed to NRC. Four days after our final correction to the license application, NRC approved our application and mailed the license to the bogus business in the non-agreement state. It took a total of 4 weeks to obtain the license. See figure 1 for the first page of the transmittal letter we received from NRC with our license. The NRC license is printed on standard 8-1/2-by-11-inch paper and contains a color NRC seal as a watermark. It does not appear to have any features that would prevent physical counterfeiting. We therefore concluded that we could alter the license without raising the suspicion of a supplier. We altered the license so that it appeared our bogus company could purchase an unrestricted quantity of sealed source materials rather than the small amounts of Americium-241 and Cesium-137 listed on the original license. We determined the proper language for the license by reviewing publicly available information. Next, we contacted two U.S. suppliers of the machines specified in our license. We requested price quotes and faxed the altered license to the suppliers as proof that we were certified to purchase the machines. Both suppliers offered to sell us the machines and provided us price quotes. One of these suppliers offered to provide twice as many machines as we requested and offered a discount for volume purchases. In a later telephone call to one of the suppliers, a representative of the supplier told us that his company does not check with NRC to confirm the terms listed on the licenses that potential customers fax them. He said that his company checks to see whether a copy of the front page of the license is faxed with the intent to purchase and whether the requested order exceeds the maximum allowable quantity a licensee is allowed to possess at any one time.
Although we had no legitimate use for the machines, our investigators received, within days of obtaining a license from NRC, price quotes and terms of payment that would have allowed us to purchase numerous machines containing sealed radioactive source materials. These purchases would have substantially exceeded the limit that NRC approved for our bogus company. If these radioactive materials were unsealed and aggregated together, the machines would yield an amount of Americium-241 that exceeds the threshold for category 3 materials. As discussed previously, according to IAEA, category 3 sources are dangerous if not safely managed or securely protected and “could cause permanent injury to a person who handled them, or was otherwise in contact with them, for some hours. It could possibly—although it is unlikely—be fatal to be close to this amount of unshielded radioactive material for a period of days to weeks.” Importantly, with patience and the proper financial resources, we could have accumulated, from other suppliers, substantially more radioactive source material than the two suppliers initially agreed to ship to us—potentially enough to reach category 2. According to IAEA, category 2 sources, if not safely managed or securely protected, “could cause permanent injury to a person for a short time (minutes to hours), and it could possibly be fatal to be close to this amount of unshielded material for a period of hours to days.” Ten days after mailing their application form to the agreement state’s department of the environment, our investigators received a call from a department license examiner. The license examiner stated that the application was deficient in some areas and said that she would send us a letter outlining what additional information the state required before approving the license. The examiner further stated that before the license was granted, she would conduct a site visit to inspect the company office and storage facilities cited in our application. Our investigators subsequently decided not to pursue the license in this state and requested that their application be withdrawn. According to an official in the department of the environment for this state, the license examiner followed the required state procedure in requesting a site visit. The official told us that as a matter of long-standing state policy, license examiners in this state conduct site visits and interview company management (especially radiation safety officers) before granting new licenses for radioactive materials. This state policy is more stringent than the guidance NRC provided agreement states in December 2006. The NRC guidance identified a site visit as one possible screening criterion to use in evaluating a new license application, but, as discussed above, a site visit is not required under the NRC guidance. On June 1, 2007, we contacted NRC and discussed the results of our work. An NRC official indicated that NRC would take immediate action to address the weaknesses we identified. After this meeting, we learned that NRC suspended its licensing program for specific licenses until it could determine what corrective actions were necessary to resolve the weaknesses. NRC also held a teleconference with a majority of the 34 agreement states to discuss our work. On June 12, 2007, NRC issued supplemental interim guidance with additional screening criteria. These criteria are intended to help a license examiner determine whether a site visit or face-to-face meeting with new license applicants is required.
NRC told us that it planned to convene a working group to develop improved guidance addressing the weaknesses we identified. NRC’s goal is to provide licenses to only those entities that can demonstrate that they have legitimate uses for radioactive materials. However, our work shows that there continue to be weaknesses in the process NRC uses to approve license applications. In our view, a routine visit by NRC staff to the site of our bogus business would have been enough to reveal our lack of facilities and equipment. Furthermore, if NRC license examiners had conducted even a minimal amount of screening—such as performing common Web searches or making telephone calls to local government or business offices—they would have developed serious doubts about our application. Once we received our license, the ease with which we were able to alter the license and obtain price quotes and commitments to ship from suppliers of radioactive materials is also cause for concern. Accordingly, we are making the following three recommendations to the Chairman of the NRC: First, to avoid inadvertently allowing a malevolent individual or group to obtain a license for radioactive materials, NRC should develop improved guidance for examining NRC license applications. In developing improved screening criteria, NRC should consider whether site visits to new licensees should be mandatory. These improved screening criteria will allow NRC to provide reasonable assurance that licenses for radioactive materials will only be issued to those with legitimate uses. Second, NRC should conduct periodic oversight of license application examiners so that NRC will be assured that any new guidance is being appropriately applied. Third, NRC should explore options to prevent individuals from counterfeiting NRC licenses, especially if this allows the purchase of more radioactive materials than they are approved for under the terms of the original license. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov or Gene Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Nuclear Regulatory Commission (NRC) regulates domestic medical, industrial, and research uses of sealed radioactive sources. Organizations or individuals attempting to purchase a sealed source must apply for a license and gain the approval of either NRC or an "agreement state." To become an agreement state, a state must demonstrate to NRC that its regulatory program is compatible with NRC regulations and is effective in protecting public health and safety. NRC then transfers portions of its authority to the agreement state. In 2003, GAO reported that weaknesses in NRC's licensing program could allow terrorists to obtain radioactive materials. NRC took some steps to respond to the GAO report, including issuing guidance to license examiners. To determine whether NRC actions to address GAO recommendations were sufficient, the Subcommittee asked GAO to test the licensing program using covert investigative methods. By using the name of a bogus business that existed only on paper, GAO investigators were able to obtain a genuine radioactive materials license from NRC. Aside from traveling to a non-agreement state to pick up and send mail, GAO investigators did not need to leave their office in Washington, D.C., to obtain the license from NRC. Further, other than obtaining radiation safety officer training, investigators gathered all the information they needed for the license from the NRC Web site. After obtaining a license from NRC, GAO investigators altered the license so it appeared that the bogus company could purchase an unrestricted quantity of radioactive sealed sources rather than the maximum listed on the approved license. GAO then sought to purchase, from two U.S. suppliers, machines containing sealed radioactive material. Letters of intent to purchase, which included the altered NRC license as an attachment, were accepted by the two suppliers. These suppliers gave GAO price quotes and commitments to ship the machines containing radioactive materials. The amount of radioactive material GAO could have acquired from these two suppliers was sufficient to reach the International Atomic Energy Agency's (IAEA) definition of category 3. According to IAEA, category 3 sources are dangerous if not safely managed or securely protected. Importantly, with patience and the proper financial resources, GAO could have accumulated substantially more radioactive source material. GAO also attempted to obtain a license from an agreement state, but withdrew the application after state license examiners indicated they would visit the bogus company office before granting the license. An official with the licensing program told GAO that conducting a site visit is a standard required procedure before radioactive materials license applications are approved in that state. As a result of this investigation, NRC suspended its licensing program until it could determine what corrective actions were necessary to resolve the weaknesses GAO identified. On June 12, 2007, NRC issued supplemental interim guidance with additional screening criteria. These criteria are intended to help a license examiner determine whether a site visit or face-to-face meeting with new license applicants is required.
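As a rough illustration of the aggregation arithmetic described in this testimony, the Python sketch below counts how many moisture gauges would have to be combined to reach the IAEA category thresholds for Americium-241. The 1.6-curie (category 3) and 16-curie (category 2) thresholds come from the testimony; the 0.05-curie content per gauge is a hypothetical assumption used only to make the calculation concrete, not a figure from the report.

    import math

    # How many gauges would aggregate to a category 3 or category 2
    # quantity of Americium-241? Thresholds are from the testimony; the
    # per-gauge activity is a hypothetical assumption.
    AM241_CATEGORY_3_CI = 1.6   # curies (category 3 threshold)
    AM241_CATEGORY_2_CI = 16.0  # curies (category 2 threshold)
    CI_PER_GAUGE = 0.05         # curies per gauge (hypothetical)

    for label, threshold in [("category 3", AM241_CATEGORY_3_CI),
                             ("category 2", AM241_CATEGORY_2_CI)]:
        gauges = math.ceil(threshold / CI_PER_GAUGE)
        print(f"{label}: at least {gauges} gauges")

Under these assumptions, a few dozen gauges would aggregate to a category 3 quantity, which is consistent with the testimony's point that a purchaser with patience and funds could accumulate a dangerous amount from routine industrial equipment.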
Oversight of nursing homes is a shared federal-state responsibility. Based on statutory requirements, CMS (1) defines quality standards that nursing homes must meet to participate in the Medicare and Medicaid programs and (2) contracts with state survey agencies to assess whether homes meet those standards through annual surveys and complaint investigations. Although CMS has issued extensive guidance to states on determining compliance with federal quality requirements, we have found that some state surveys understate quality problems at nursing homes. Federal nursing home quality standards focus on the delivery of care, resident outcomes, and facility conditions. These standards, totaling approximately 200, are grouped into 15 categories, such as Resident Rights, Quality of Life, Resident Assessment, Quality of Care, Pharmacy Services, and Administration. For example, there are 23 standards within the Quality of Care category, ranging from “promote the prevention of pressure sore development” to “the resident environment remains as free of accident hazards as is possible.” CMS has also developed detailed investigative protocols to assist state survey agencies in determining whether nursing homes are in compliance with federal quality standards. This guidance is intended to ensure the thoroughness and consistency of state surveys and complaint investigations. Every nursing home receiving Medicare or Medicaid payment must undergo a standard state survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. During a standard survey, teams of state surveyors—generally consisting of registered nurses, social workers, dieticians, or other specialists—evaluate compliance with federal quality standards. Based on the care provided to a sample of residents, the survey team (1) determines whether the care and services provided meet the assessed needs of the residents and (2) measures resident outcomes, such as the incidence of preventable pressure sores, weight loss, and accidents. In contrast to a standard survey, a complaint investigation generally focuses on a specific allegation regarding a resident’s care or safety and provides an opportunity for state surveyors to intervene promptly if problems arise between standard surveys. Surveyors generally follow state procedures when investigating complaints, but must comply with certain federal guidelines and time frames. When deficiencies are identified, federal sanctions can be imposed to help encourage homes to correct them. Sanctions are generally reserved for serious deficiencies—those at the G through L levels—which constitute actual harm or immediate jeopardy. Sanctions for such serious quality problems can affect a home’s revenues and provide financial incentives to return to and maintain compliance. Such sanctions include fines known as civil money penalties, denial of payment for new Medicare or Medicaid admissions, or termination from the Medicare and Medicaid programs. State surveys that miss serious deficiencies or cite deficiencies at too low a scope and severity level have enforcement implications because a nursing home may escape sanctions intended to discourage repeated noncompliance. For example, facilities that receive at least one G through L level deficiency on successive standard surveys or complaint investigations must be referred for immediate sanctions.
In addition, CMS guidance calls for higher fines when a home has a poor compliance history and requires that state survey teams revisit a home to verify that serious deficiencies have actually been corrected (such revisits are not required for most deficiencies cited below the actual harm level—A through F). Statutorily required federal monitoring surveys, which are conducted annually in at least 5 percent of state-surveyed Medicare and Medicaid nursing homes in each state, are a key CMS oversight tool in ensuring the adequacy of state surveys. CMS headquarters—specifically, CMS’s Survey and Certification Group—is responsible for the management of the federal monitoring survey database and for oversight of the 10 CMS regional offices’ implementation of the federal monitoring survey program. Federal surveyors located in regional offices conduct federal monitoring surveys. The surveys can be either comparative or observational, with each offering unique advantages and disadvantages as an oversight tool. For example, an advantage of comparative surveys is that they are an independent evaluation of a nursing home recently surveyed by a state survey agency team. A disadvantage is that the time lag between the two surveys can make analysis of differences difficult. Comparative survey. A federal survey team conducts an independent survey of a home recently surveyed by a state survey agency in order to compare and contrast the findings. This comparison takes place after completion of the federal survey. When federal surveyors identify a deficiency not cited by state surveyors, they assess whether the deficiency existed at the time of the state survey and should have been cited by entering either yes or no in response to the question, “Based on the evidence available to the [federal team], should the [state] team have cited this [deficiency]?” This assessment is critical in determining whether understatement occurred because some deficiencies cited by federal surveyors may not have existed at the time of the state survey. For example, a deficiency identified during a federal survey could involve a resident who was not in the nursing home at the time of the earlier state survey. By statute, comparative surveys must be conducted within 2 months of the completion of the state survey. However, differences in timing, resident sample selection, and staffing can make analysis of differences between the state and federal comparative surveys difficult. On the basis of our prior recommendations, CMS has taken several steps to ensure that comparative surveys more accurately capture conditions at the time of the state survey. For example, CMS now calls for the length of time between the state and federal surveys to be between 10 and 30 working days and requires federal surveyors conducting a comparative survey in a nursing home to include at least half of the state survey’s sample of residents from that nursing home in the comparative survey sample, making it easier to determine whether state surveyors missed a deficiency. Furthermore, federal comparative survey teams are expected to match the number of staff assigned to the state survey. CMS also issued guidance in October 2002 defining the criteria for federal surveyors to consider when selecting facilities for comparative surveys. These selection criteria can generally be categorized as state survey team performance and facility characteristics. Regional offices were given latitude in their use of these criteria and may supplement them with other selection factors unique to their regions.
For example, some regions use statistics on the prevalence of pressure sores in a nursing home’s resident population as a comparative survey selection factor. Observational survey. Federal surveyors accompany a state survey team to a nursing home to evaluate the team’s on-site survey performance and ability to document survey deficiencies. State teams are evaluated in six areas—Concern Identification, Sample Selection, General Investigation, Food-Borne Illness Investigation, Medication Investigations, and Deficiency Determination—and are rated in one of five categories for each of the six measures. The rating categories—from highest to lowest—are extremely effective, very effective, satisfactory, less than satisfactory, and much less than satisfactory. CMS annual state performance reviews require that state survey teams achieve an average rating of satisfactory. Observational surveys allow federal surveyors to provide more immediate feedback to state surveyors and to identify state surveyor training needs. However, observational surveys are not independent evaluations of the state survey. Because state surveyors may perform their survey tasks more attentively than they would if federal surveyors were not present, observational surveys may not provide an accurate picture of state surveyors’ typical performance. Since 2001, CMS has also taken steps to strengthen observational surveys. For example, the agency issued written guidance defining a standard process for resolving disagreements and a new manual to increase consistency across observational surveys. The 976 federal comparative surveys conducted from fiscal year 2002 through 2007 ranged from as few as 10 in Vermont, which has about 40 facilities, to as many as 49 in California, which has about 1,300 facilities. Of the 4,023 federal observational surveys conducted during the same period, the number ranged from 16 in New Hampshire to 346 in California. The results of federal monitoring surveys, including information on the corresponding state surveys, are entered in the federal monitoring survey database. In fiscal year 2002, CMS began including information on comparative surveys in the database, and the agency began requiring federal surveyors to determine whether a deficiency cited by federal but not state surveyors had been missed (that is, whether state surveyors should have cited it). Although comparative surveys and the wide variability across states in the proportion of homes with deficiencies at the actual harm and immediate jeopardy levels indicate that state surveyors miss some serious deficiencies, our prior work has also indicated that state surveyors sometimes understate the scope and severity of a deficiency. In 2003, we found widespread understatement of actual harm deficiencies in a sample of surveys from homes with a history of harming residents. Overall, 39 percent of the 76 state surveys we reviewed had documented problems that should have been classified as actual harm instead of as lower-level deficiencies. A substantial proportion of federal comparative surveys identify missed deficiencies at the potential for more than minimal harm level or above. From fiscal year 2002 through 2007, about 15 percent of federal comparative surveys nationwide identified state surveys that failed to cite at least one deficiency at the most serious levels of noncompliance—the actual harm and immediate jeopardy levels (G through L).
There was wide variation across states in the proportion of comparative surveys that found at least one missed serious deficiency, from more than 25 percent in nine states to none in seven others. In contrast to missed serious deficiencies, missed deficiencies at the potential for more than minimal harm level (D through F) were considerably more widespread, with the proportion of comparative surveys identifying such missed deficiencies exceeding 40 percent in all but five states. Every state had at least one comparative survey with missed D through F level deficiencies. At both levels of noncompliance, the most frequently missed deficiencies involved Quality of Care standards. Federal observational survey results and prior GAO reports have highlighted several factors that may contribute to the understatement of deficiencies. About 15 percent (142) of the 976 comparative surveys conducted from fiscal year 2002 through 2007 identified state surveys that missed at least one deficiency at the actual harm or immediate jeopardy level (G through L), the most serious levels of noncompliance. This proportion fluctuated from a high of 17.5 percent in fiscal year 2003 to a low of 11.1 percent in fiscal year 2004, but it has remained relatively constant at about 15 percent for the last several fiscal years (see fig. 1). This proportion is small, but CMS maintains that any missed serious deficiencies are unacceptable. From fiscal year 2002 through 2007, federal surveyors identified missed serious deficiencies in 25 percent or more of their comparative surveys in nine states. The proportion of missed serious deficiencies in these nine states ranged from 26.3 percent in Tennessee to 33.3 percent in New Mexico, South Carolina, South Dakota, and Wyoming (see table 2). The total number of missed deficiencies at the G through L levels also varied across these nine states, from a low of 4 in South Dakota to a high of 19 in South Carolina. Federal surveyors identified no missed serious deficiencies in seven states (see app. II for complete state results). In contrast to missed serious deficiencies, missed deficiencies at the potential for more than minimal harm level (D through F) were considerably more widespread on comparative surveys conducted during fiscal years 2002 through 2007. Approximately 70 percent of comparative surveys conducted nationwide identified state surveys that missed at least one deficiency at the potential for more than minimal harm level (D through F), with such missed deficiencies identified on more than 40 percent of comparative surveys in all but five states—Alaska, Ohio, Vermont, West Virginia, and Wisconsin. On average, state surveys selected for comparative surveys failed to identify 2.5 D through F level deficiencies per survey. Undetected care problems at the D through F level are of concern because they could become more serious over time if nursing homes are not required to take corrective actions. Missed deficiencies at the potential for more than minimal harm level were not isolated to a single year during the 6 fiscal years we examined and continued to be a problem for states in fiscal year 2007. Nationally, the proportion of comparative surveys identifying at least one missed D through F level deficiency in fiscal year 2007 was about 74 percent (see fig. 2). For results by state, see appendix III.
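The proportions above can be reproduced from per-survey records with a few lines of code. The following Python sketch assumes a hypothetical record layout (the actual federal monitoring survey database schema is not shown in this report) and computes the share of comparative surveys with at least one missed deficiency at the G through L and D through F levels; the sample records are illustrative, not GAO's data.

    # Share of comparative surveys with at least one missed deficiency in a
    # given severity range. Each record lists the CMS severity letters of
    # deficiencies the federal team found that the state team did not cite.
    surveys = [
        {"missed": ["D", "G"]},
        {"missed": ["E"]},
        {"missed": []},
        {"missed": ["D", "D", "F"]},
    ]

    SERIOUS = set("GHIJKL")         # actual harm or immediate jeopardy
    MORE_THAN_MINIMAL = set("DEF")  # potential for more than minimal harm

    def share_with_missed(records, levels):
        """Fraction of surveys with at least one missed deficiency in levels."""
        hits = sum(1 for r in records if any(d in levels for d in r["missed"]))
        return hits / len(records)

    print(f"missed G-L: {share_with_missed(surveys, SERIOUS):.1%}")
    print(f"missed D-F: {share_with_missed(surveys, MORE_THAN_MINIMAL):.1%}")

Applied to the actual data, this kind of calculation yields the figures reported above: 142 of 976 comparative surveys, or about 15 percent, had at least one missed G through L deficiency, and about 70 percent had at least one missed D through F deficiency.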
Our analysis found that the most frequently missed deficiencies at both the potential for more than minimal harm (D through F) and the actual harm or immediate jeopardy (G through L) levels occurred in quality standards under CMS’s Quality of Care category. Missed deficiencies in this category involved residents’ receipt of the necessary care and services to attain and maintain the highest practicable physical, mental, and psychosocial well-being—such as prevention of pressure sores, nutrition and hydration, accident prevention, and assistance with bathing and grooming. From fiscal year 2002 through 2007, 11.9 percent of federal comparative surveys (116) cited at least one Quality of Care deficiency at the actual harm or immediate jeopardy level that state survey teams failed to cite. These 116 surveys contained a total of 143 missed serious Quality of Care deficiencies. The category with the next highest frequency of missed serious deficiencies was Resident Behavior and Facility Practices, with such deficiencies identified on only 2.2 percent of federal comparative surveys. At the potential for more than minimal harm level, Quality of Care was one of two categories with the highest frequency of missed deficiencies—31.7 percent. For the percentage of missed deficiencies in each of the CMS quality standard categories, see appendix IV. Both federal observational surveys and our prior reports have identified factors that may contribute to the understatement of deficiencies by state survey teams. From fiscal year 2002 through 2007, 80 percent of the 4,999 federal monitoring surveys were observational. Our review of observational survey data—which are collected during direct observation of state survey teams—found that some of the lowest state survey team ratings nationwide were in the General Investigation and Deficiency Determination areas. Together, these two areas directly affect the appropriate identification and citation of deficiencies. The General Investigation segment of an observational survey evaluates the effectiveness with which the state survey team collected information to determine how the facility’s environment and care of residents affect residents’ quality of life, health and safety, and ability to reach their highest practicable physical, mental, and psychosocial well-being. This segment includes observations of state survey team actions such as collection of information, discussion of survey observations, interviews with facility residents, and implementation of CMS investigative protocols. The Deficiency Determination segment of an observational survey evaluates the skill with which the state survey teams (1) integrate and analyze all information collected and (2) use the guidance to surveyors and regulatory requirements to make accurate compliance determinations. This segment includes observations of state survey team actions such as reviews of regulatory requirements, team participation in deficiency discussions, presentation of complete information, accurate decision making, and accurate citation of deficiencies. Nationwide, 7.7 percent of the state survey teams observed by federal surveyors received below satisfactory ratings on the General Investigation measure from fiscal year 2002 through 2007. During the same 6 fiscal years, 9.2 percent, or about 1 in 11, of the state survey teams observed by federal surveyors received below satisfactory ratings on the Deficiency Determination measure.
Our analysis found variation across states in survey team performance in General Investigation and Deficiency Determination. In 16 states, the proportion of survey teams receiving below satisfactory ratings on both measures exceeded the national average, while in 28 states the proportion was lower than the national average (see app. V). Poor performance on these observational survey measures may be a contributing factor to the understatement of deficiencies by state survey teams. For example, of the nine states in table 2 with the highest percentage of missed serious deficiencies on comparative surveys, six had a higher-than-average proportion of teams receiving below satisfactory ratings for both General Investigation and Deficiency Determination (see table 3). Our prior reports have described some other factors that may contribute to survey inconsistency and the understatement of deficiencies by state survey teams: (1) weaknesses in CMS’s survey methodology, such as poor documentation of deficiencies; (2) confusion about the definition of actual harm; (3) predictability of surveys, which allows homes to conceal problems if they so desire; (4) inadequate quality assurance processes at the state level to help detect understatement in the scope and severity of deficiencies; and (5) inexperienced state surveyors as a result of retention problems. In ongoing work, we are investigating the factors that contribute to understatement. CMS has taken steps to improve the federal monitoring survey program, but weaknesses remain in program management and oversight. For example, CMS has improved processes to ensure that comparative surveys more accurately reflect conditions at the time of the state survey, has shifted control of the federal monitoring survey database to the office responsible for ensuring the effectiveness of state surveys, and has begun examining how to use monitoring survey data to improve oversight. Despite this progress, the management and oversight potential of the program has not been fully realized. In particular, CMS (1) has only begun exploring options for identifying understatement that occurs in cases where state surveys cite deficiencies at too low a level, for possible implementation in fiscal year 2009, and (2) is not effectively managing the federal monitoring survey database or using the database to oversee consistent implementation of the federal monitoring survey program by its regional offices. CMS has taken steps in three areas—time between surveys, resident sample, and survey resources—to ensure that comparative surveys more accurately capture the conditions at the time of the state survey. Time between surveys. In fiscal year 2002, CMS initiated a policy that shortened the length of time between state and comparative surveys from 2 months to 1 month. CMS relaxed the 1-month standard by changing the requirement to 30 working days in fiscal year 2003. As a result of shortening the time between the two surveys, the conditions at the time of the comparative survey are more likely to reflect those at the time of the state survey; for example, the same residents are still likely to be in the nursing home. Comparative surveys during fiscal year 2007 took place on average 21.4 working days (30.9 calendar days) after state surveys. Resident sample. Beginning in fiscal year 2003, CMS policy required that comparative surveys include at least half of the residents from state survey investigative samples.
Officials from several regional offices said that examining the same residents allows for more clear-cut determinations of whether the state should have cited a deficiency. Since the policy change, about 78 percent of comparative surveys from fiscal year 2003 through 2007 included at least half the residents from state surveys’ investigative samples. By comparison, only 13 percent of comparative surveys met that 50 percent threshold in fiscal year 2002, the year before the policy went into effect. Survey resources. Beginning in fiscal year 2003, CMS initiated a policy that each comparative survey should have the same number of federal surveyors as its corresponding state survey, again to more closely mirror the conditions under which the state survey was conducted. We found that in fiscal year 2007, the average state survey team (3.4 surveyors) was larger than the average federal survey team (3.0 surveyors). However, on average, federal surveyors remained on-site longer than state surveyors—4.3 days for federal surveyors compared with 3.7 days for state surveyors. When the number of surveyors and time on-site are taken together, state surveys averaged 12.6 surveyor-days and federal comparative surveys averaged 12.9 surveyor-days. Given these improvements, we asked the regional offices how receptive state survey teams were to feedback that they had missed deficiencies. Most regional office officials told us that in general the feedback session with state surveyors on missed deficiencies was not contentious and that state surveyors generally accepted the feedback provided. However, CMS established a formal dispute resolution process for comparative surveys in October 2007. The process is similar to the process already in place for resolving disagreements about observational survey results. While CMS requires federal surveyors to determine whether a deficiency cited on a comparative but not a state survey was missed by state surveyors, there is no comparable requirement for deficiencies that are cited at different scope and severity levels. As a result, comparative surveys do not effectively capture the extent of the understatement of serious deficiencies by state surveyors. As with missed deficiencies, a discrepancy between federal and state survey results does not automatically indicate understatement. For example, the deficiency could have worsened by the time of the federal survey. Although CMS does not require federal surveyors to evaluate scope and severity differences between the two sets of surveys, we found that some regional offices used the validation question for missed deficiencies—“Based on the evidence available to the [federal team], should the [state] team have cited this [deficiency]?”—to make such a determination. Using the validation question to make these determinations is contrary to CMS guidance issued in October 2003, which instructed comparative survey teams to answer this question only when the state failed to cite the deficiency altogether. To assess whether differences in scope and severity levels were actually understated—rather than deficiencies that worsened between the state and federal surveys—we first identified all 71 deficiencies on comparative surveys conducted from fiscal year 2002 through 2007 where federal survey teams cited actual harm or immediate jeopardy deficiencies that state survey teams cited at a lower scope and severity level. We then examined the comment fields in the federal monitoring survey database associated with those deficiencies.
Our analysis identified 27 deficiencies (38 percent) in which federal survey teams determined that a state’s scope and severity citation was too low. For another 22 deficiencies (31 percent), federal survey teams found that the state’s lower scope and severity determination was appropriate, given the circumstances at the time of the state survey. The remaining 22 deficiencies (31 percent) did not have comments or contained remarks that were inconclusive about whether the state deficiency citation was too low. When the confirmed scope and severity understatement was included with understatement caused by missed deficiencies, the total percentage of comparative surveys with understatement of serious deficiencies increased by an average of about 1 percentage point over the 6 fiscal years we analyzed (see fig. 3). While CMS headquarters does not require federal surveyors to determine whether a deficiency cited by state survey teams was cited at too low a scope and severity level, some regional offices have developed their own procedures to track this information and use it to provide feedback to state survey agencies. For example, in one regional office an individual reviews all comment fields for a year’s worth of comparative surveys, makes a hand count of scope and severity differences that states should have cited, and then shares this information with the state survey agencies during their annual performance reviews. Because the federal monitoring survey database does not automatically collect data on scope and severity determinations, CMS headquarters does not have access to the data analyses the regions have independently conducted. Some of the regional offices told us that they would like the federal monitoring survey database to track scope and severity understatement in a way similar to how deficiencies missed by state surveyors are tracked. In January 2008, CMS officials told us that they had initiated a pilot program in October 2007 to test the collection of data on understatement of scope and severity differences. According to CMS, the pilot, which will run through 2008 for possible fiscal year 2009 implementation, is necessary because the agency needs to determine which scope and severity understatement differences should be captured. For example, CMS is uncertain whether regions should focus only on differences that would raise the scope and severity level to actual harm or immediate jeopardy and not assess differences for understatement that occurs at lower scope and severity levels. Our analysis found that CMS headquarters was not effectively managing the federal monitoring survey database or using the database to oversee consistent implementation of the federal monitoring survey program by regional offices. While CMS uses data from comparative and observational surveys to provide feedback to state survey agencies during state performance reviews, CMS officials told us that they recognized the need to improve their management and use of the database for better oversight of the agency’s 10 regional offices. We identified two problems in CMS’s management of the federal monitoring survey database. CMS was not aware that (1) the results of a considerable number of comparative surveys were missing from the database and (2) the validation question for missed deficiencies was being used by some regional offices to identify scope and severity differences, contrary to CMS guidance. Missing data.
In October 2007, we identified missing comparative surveys for two regional offices dating back to 2005 and asked CMS to follow up with officials in those regions. At least one of the regions had completed the surveys but had failed to upload them into the national database. We also found that CMS had not included data in the federal monitoring survey database from 162 contractor-led comparative surveys conducted between fiscal years 2004 and 2007. Use of validation question contrary to CMS guidance. Some regional offices were using the missed deficiency validation question to make determinations about whether scope and severity differences constituted understatement, making it difficult to distinguish between missed deficiencies and scope and severity understatement. In addition, we found that the regional office answer to the validation question was not always consistent with the information recorded in the comment box. Similarly, we identified weaknesses in CMS’s use of the database for regional office oversight. For example, CMS was not (1) examining comparative survey data to ensure that regional offices comply with CMS guidance intended to ensure that comparative surveys more accurately capture the conditions at the time of the state survey and (2) using the database to identify inconsistencies between comparative and observational survey results. Ensuring regional office compliance. While CMS has provided guidance to its regional offices to help ensure that comparative surveys more accurately capture the conditions at the time of the state survey, the agency is not fully using available data to ensure that the regional offices implement the agency’s guidance. For example, we found that the length of time between state and comparative surveys varied broadly by CMS region. In 2007, the average time gap ranged from a low of 15.4 working days (22.5 calendar days) in the Boston region to a high of 38.5 working days (54.4 calendar days) in the New York region. Furthermore, while 78 percent of comparative surveys from fiscal year 2003 through 2007 followed CMS’s guidance to include at least half of the residents from state surveys’ investigative samples, 22 percent of comparative surveys did not meet this threshold. Finally, when we contacted officials in CMS headquarters to ask clarifying questions about the data variables needed to conduct these analyses, the headquarters officials were not familiar with a number of the variables and referred us to a CMS staff person in one of the regional offices. Together, these three examples suggest that CMS is not effectively using the data to hold regional offices accountable for implementing guidance. Identifying inconsistencies between comparative and observational results. CMS officials told us that they have begun to explore regional office differences in less than satisfactory ratings for state survey teams on observational surveys. However, CMS officials told us that they do not plan to use the database to identify inconsistencies between comparative and observational surveys that may warrant follow-up to ensure that regional offices are adhering to CMS guidance and consistently assessing state surveyor performance. For example, some states that performed below the national average in identifying serious deficiencies on comparative surveys received above-average marks on observational survey measures for Deficiency Determination and General Investigation.
Wyoming’s 33.3 percent rate for surveys with missed serious deficiencies was more than double the national average of about 15 percent for surveys conducted during fiscal years 2002 through 2007. Yet Wyoming never received a below satisfactory rating on its General Investigation or Deficiency Determination measures during 18 observational surveys over that same 6-year period. We found similar inconsistencies in the results of federal monitoring surveys for South Dakota and a few other states. Although inconsistencies between comparative and observational surveys may not necessarily indicate a problem, they may warrant investigation. For example, in a small state like Wyoming it is likely that comparative and observational surveys have evaluated the same group of state surveyors. Further, Wyoming and South Dakota are two of six states whose federal monitoring surveys are conducted by CMS’s Denver regional office. Of the 140 observational surveys conducted from fiscal year 2002 through 2007, federal surveyors from the Denver regional office gave one below satisfactory rating on the Deficiency Determination measure. That 0.7 percent rate of below satisfactory ratings was less than a quarter of the rate for the regional office with the next-lowest percentage—the Chicago regional office—which awarded below satisfactory ratings to 3.3 percent of the state survey teams it observed. With about 1 in 6 comparative surveys concluding that state survey teams had missed a serious deficiency or understated its scope and severity level, it is evident that state survey agency performance limits the federal government’s ability to obtain an accurate picture of how often nursing home residents face actual harm or are at risk of serious injury or death. These missed serious deficiencies most frequently involved Quality of Care, reflecting shortcomings in fundamental provider responsibilities such as ensuring proper nutrition and hydration, preventing accidents, and preventing pressure sores. Observational survey results also underscore problems state surveyors may face in identifying facility deficiencies; about 1 in 11 state survey teams nationwide were rated as below satisfactory by CMS surveyors on the Deficiency Determination measure. We found that comparative survey data may mask the true extent of understatement because CMS’s current protocol does not require regional offices to track in the federal monitoring survey database when state surveyors cite lower-than-appropriate scope and severity levels. As we conducted our work, CMS officials recognized this problem and in October 2007 began to experiment with a pilot program to measure understated scope and severity. However, at the conclusion of the pilot, scheduled for fiscal year 2008, CMS may decide not to implement a validation question for all scope and severity differences. We believe it is important to assess differences for understatement that occurs at the D through L levels—potential for more than minimal harm, actual harm, and immediate jeopardy. We also found that CMS was not effectively managing the federal monitoring survey database to ensure that regional offices were entering data in a timely and consistent fashion. Lack of accurate and reliable data hinders effective oversight. For example, we found that the database was missing a considerable number of comparative surveys. Further, CMS has not used the federal monitoring survey database to its full potential as an oversight tool.
For example, CMS is not fully using data on comparative surveys to ensure that regional offices are implementing guidance intended to improve federal monitoring surveys. Although CMS’s Survey and Certification Group assumed control of the database in January 2007, headquarters staff often referred us to CMS regional office staff to answer specific database questions, suggesting a lack of familiarity with the organization and content of the database. In addition, agency officials told us that they do not plan to follow up on inconsistencies between comparative and observational survey results that could indicate weaknesses in how regional offices evaluate state surveyors’ performance. Identifying and following up on such inconsistencies could help ensure database reliability and hold regional office officials accountable for their implementation of the federal monitoring survey program, a program required by statute. To address weaknesses in CMS’s management of the federal monitoring survey database that also affect the agency’s ability to effectively track understatement, we recommend that the Administrator of CMS take the following two actions: Require regional offices to determine whether understatement occurred when state surveyors cited a deficiency at a lower scope and severity level than federal surveyors did and to track this information in the federal monitoring survey database. Establish quality controls to improve the accuracy and reliability of information entered into the federal monitoring survey database. To address weaknesses that affect CMS’s ability to oversee regional office implementation of the federal monitoring survey program, we recommend that the Administrator of CMS take the following two actions: Routinely examine comparative survey data and hold regional offices accountable for implementing CMS guidance that is intended to ensure that comparative surveys more accurately capture the conditions at the time of the state survey. Regularly analyze and compare federal comparative and observational survey results. In written comments on our draft report, HHS indicated that it fully endorsed and would implement our four recommendations intended to strengthen management and oversight of the federal monitoring survey program. The comments generally outlined CMS’s implementation plan through 2009 and indicated that some steps, such as improved management of the federal monitoring survey database, are already under way. HHS’s comments are reproduced in appendix VI. The majority of HHS’s comments focused on its strategic approach to improving oversight: (1) ensuring that all nursing homes are surveyed at least once every 15 months, (2) improving surveyor understanding of federal quality requirements through improved guidance and training, (3) increasing the consistency of state surveys through the introduction of a new nursing home survey methodology, and (4) improving the use of data generated by federal monitoring surveys. Many of these strategies aim to address the underlying causes of understatement, the topic of a forthcoming GAO report. HHS also noted that limitations in the Medicare survey and certification budget underscore the agency’s need to target resources effectively to maximize results. For example, HHS indicated that the implementation of the new survey methodology will be dependent on the level of funding in the overall survey and certification budget through fiscal year 2014. Survey and Certification funding is the subject of another forthcoming GAO report.
Two of HHS’s observations merit further discussion. First, HHS noted that understatement that arises from a lack of understanding or confusion about federal requirements would generally not be detected through federal monitoring surveys because both federal and state surveyors would be affected by the same limitation. We believe that the consistency with which federal surveys have identified serious deficiencies missed by state surveyors from fiscal year 2002 through 2007—about 15 percent, on average—suggests that federal surveyors have a better understanding of CMS quality requirements than do state surveyors. We have previously reported that the limited experience level of state surveyors because of the high turnover rate was a contributing factor to deficiency understatement. Second, HHS questioned our use of “one missed deficiency per survey” as a measure of understatement. We believe that this standard is appropriate for serious deficiencies that result in harm or immediate jeopardy (G through L level) because the goal of state surveys should be to identify and require nursing homes to address all such deficiencies. CMS itself uses this standard during annual state performance reviews. We also used this standard to describe the proportion of comparative surveys that identified missed deficiencies at the potential for more than minimal harm level (D through F). Identifying and requiring nursing homes to correct such deficiencies is important because if uncorrected they have the potential to become more serious. Compared with missed serious deficiencies, we found that understatement of potential for more than minimal harm deficiencies was more widespread—about 70 percent of comparative surveys identified at least one such missed deficiency. The proportion of comparative surveys identifying missed deficiencies at the D through F level was greater than 40 percent in all but five states, and state surveys selected for comparative surveys failed to identify an average of 2.5 deficiencies in this range per survey. In short, the magnitude of understatement at the potential for more than minimal harm level should be a cause for concern. HHS also provided technical comments, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. In order to identify trends in the percentage of nursing homes cited with actual harm or immediate jeopardy deficiencies, we analyzed data from the Centers for Medicare & Medicaid Services’ (CMS) On-Line Survey, Certification, and Reporting system (OSCAR) database for fiscal years 2002 through 2007 (see table 4). Because homes must be surveyed at least every 15 months, with a required 12-month statewide average, it is possible that a home was surveyed more than once in any fiscal year.
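A minimal sketch of the de-duplication step described next (keeping only each home's most recent survey in a fiscal year) follows in Python; the field names and sample records are hypothetical, since the actual OSCAR extract layout is not shown in this report.

    from datetime import date

    # Keep only each home's most recent survey within a fiscal year.
    surveys = [
        {"home_id": "H001", "fiscal_year": 2007, "survey_date": date(2006, 11, 2)},
        {"home_id": "H001", "fiscal_year": 2007, "survey_date": date(2007, 8, 15)},
        {"home_id": "H002", "fiscal_year": 2007, "survey_date": date(2007, 3, 9)},
    ]

    latest = {}
    for s in surveys:
        key = (s["home_id"], s["fiscal_year"])
        if key not in latest or s["survey_date"] > latest[key]["survey_date"]:
            latest[key] = s

    deduped = list(latest.values())
    print(len(deduped))  # 2, i.e., one record per home per fiscal year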
To avoid double counting homes, we included only a home’s most recent survey from each fiscal year. Because CMS conducts a relatively small number of comparative surveys, it is not possible to compare the results of comparative surveys to the results of all state surveys. In addition to the contact named above, Walter Ochinko, Assistant Director; Katherine Nicole Laubacher; Dan Lee; Elizabeth T. Morrison; Steve Robblee; Karin Wallestad; and Rachael Wojnowicz made key contributions to this report. Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004. Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington, D.C.: July 16, 2004. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
GAO reports since 1998 have demonstrated that state surveyors, who evaluate the quality of nursing home care on behalf of CMS, sometimes understate the extent of serious care problems in homes because they miss deficiencies. CMS oversees the effectiveness of state surveys through the federal monitoring survey program. In this program, federal surveyors in CMS's regional offices either independently evaluate state surveys by resurveying a home (comparative surveys) or directly observe state surveyors during a routine nursing home survey (observational surveys). GAO was asked to evaluate the information federal monitoring surveys provide on understatement and the effectiveness of CMS management and oversight of the survey program. To do this, GAO analyzed the results of federal monitoring surveys for fiscal years 2002 through 2007, reviewed CMS guidance for the survey program, and interviewed headquarters and regional office officials. A substantial proportion of federal comparative surveys identify missed deficiencies at the potential for more than minimal harm level or above. During fiscal years 2002 through 2007, about 15 percent of federal comparative surveys nationwide identified state surveys that failed to cite at least one deficiency at the most serious levels of noncompliance--actual harm and immediate jeopardy. Overall, nine states missed serious deficiencies on 25 percent or more of comparative surveys; in seven states federal surveyors identified no such missed deficiencies. During the same period, missed deficiencies at the lowest level of noncompliance--the potential for more than minimal harm--were more widespread: nationwide, approximately 70 percent of federal comparative surveys identified state surveys missing at least one deficiency at the lowest level of noncompliance, and in all but five states the proportion of state surveys with such missed deficiencies exceeded 40 percent. Undetected care problems at this level are a concern because they could become more serious if nursing homes are not required to take corrective action. The most frequently missed type of deficiency on comparative surveys, at the potential for more than minimal harm level and above, was poor quality of care, such as ensuring proper nutrition and hydration and preventing pressure sores. Federal observational surveys highlighted two factors that may contribute to understatement of deficiencies: weaknesses in state surveyors' (1) investigative skills and (2) ability to integrate and analyze information collected to make an appropriate deficiency determination. These factors may contribute to understatement because they directly affect the appropriate identification and citation of deficiencies. CMS has taken steps to improve the federal monitoring survey program, but weaknesses remain in program management and oversight. For example, CMS has improved processes to ensure that comparative surveys more accurately reflect conditions at the time of the state survey, such as requiring that comparative surveys occur within 30 working days of the state survey rather than within the 2 months set in statute. Despite these improvements, the management and oversight potential of the program has not been fully realized. For example, CMS has only begun to explore options for identifying understatement that occurs in cases where state surveys cite deficiencies at too low a level, for possible implementation in fiscal year 2009.
In addition, CMS is not effectively managing the federal monitoring survey database to ensure that the regional offices are entering data accurately and reliably--CMS was unaware, for example, that a considerable number of comparative surveys had not been entered. Furthermore, CMS is not using the database to oversee consistent implementation of the program by the regional offices--for example, the agency is not using the database to identify inconsistencies between comparative and observational survey results.
Both DOE and DOD have established offices, designated staff, and promulgated policies to provide a framework for the OUO and FOUO programs. However, their policies lack sufficient clarity in important areas, which could result in inconsistencies and errors. DOE policy clearly identifies the office responsible for the OUO program and establishes a mechanism to mark the FOIA exemption used as the basis for the OUO designation on a document. However, our analysis of DOD’s FOUO policies shows that it is unclear which DOD office is responsible for the FOUO program, and whether personnel designating a document as FOUO should note the FOIA exemption used as the basis for the designation on the document. Also, both DOE’s and DOD’s policies are unclear regarding at what point a document should be marked as OUO or FOUO, and what would be an inappropriate use of the OUO or FOUO designation. In our view, this lack of clarity exists in both DOE and DOD because the agencies have put greater emphasis on managing classified information, which is more sensitive than OUO or FOUO information. DOE’s Office of Security issued an order, a manual, and a guide in April 2003 to detail the requirements and responsibilities for DOE’s OUO program and to provide instructions for identifying, marking, and protecting OUO information. DOE’s order established the OUO program and laid out, in general terms, how sensitive information should be identified and marked, and who is responsible for doing so. The guide and the manual supplement the order. The guide provides more detailed information on the applicable FOIA exemptions to help staff decide whether exemption(s) may apply, which exemption(s) may apply, or both. The manual provides specific instructions for managing OUO information, such as mandatory procedures and processes for properly identifying and marking this information. For example, the employee marking a document is required to place on the front page of the document an OUO stamp that has a space for the employee to identify which FOIA exemption is believed to apply; the employee’s name and organization; the date; and, if applicable, any guidance the employee may have used in making this determination. According to one senior DOE official, requiring the employee to cite a reason why a document is designated as OUO is one of the purposes of the stamp, and one means by which DOE’s Office of Classification encourages practices consistent with the order, guide, and manual throughout DOE. Figure 1 shows the DOE OUO stamp. With regard to DOD, its regulations are unclear regarding which DOD office controls the FOUO program. Although responsibility for the FOUO program shifted from the Director for Administration and Management to the Office of the Assistant Secretary of Defense, Command, Control, Communications, and Intelligence (now the Under Secretary of Defense, Intelligence) in October 1998, this shift is not reflected in current regulations. Guidance for DOD’s FOUO program continues to be included in regulations issued by both offices. As a result, which DOD office has primary responsibility for the FOUO program is unclear. According to a DOD official, on occasion this lack of clarity causes personnel who have FOUO questions to contact the wrong office. A DOD official said that the department began coordination of a revised Information Security regulation covering the FOUO program at the end of January 2006. 
The new regulation will reflect the change in responsibilities and place greater emphasis on the management of the FOUO program. DOD currently has two regulations, issued by each of the offices described above, containing similar guidance that addresses how unclassified but sensitive information should be identified, marked, handled, and stored. Once information in a document has been identified as for official use only, it is to be marked FOUO. However, unlike DOE, DOD has no departmentwide requirement to indicate which FOIA exemption may apply to the information, except when it has been determined to be releasable to a federal governmental entity outside of DOD. We found, however, that one of the Army’s subordinate commands does train its personnel to put an exemption on any documents that are marked as FOUO, but does not have this step as a requirement in any policy. In our view, if DOD were to require employees to take the extra step of marking the exemption that may be the reason for the FOUO designation at the time of document creation, it would help assure that the employee marking the document had at least considered the exemptions and made a thoughtful determination that the information fit within the framework of the FOUO designation. Including the FOIA exemption on the document at the time it is marked would also facilitate better agency oversight of the FOUO program, since it would provide any reviewer/inspector with an indication of the basis for the marking. In addition, both DOE’s and DOD’s policies are unclear as to the point at which the OUO or FOUO designation should actually be affixed to a document. If a document might contain information that is OUO or FOUO but it is not so marked when it is first created, the risk that the document could be mishandled increases. DOE policy is vague about the appropriate time to apply a marking. DOE officials in the Office of Classification stated that their policy does not provide specific guidance about at what point to mark a document because such decisions are highly situational. Instead, according to these officials, the DOE policy relies on the “good judgment” of DOE personnel in deciding the appropriate time to mark a document. Similarly, DOD’s current Information Security regulation addressing the FOUO program does not identify at what point a document should be marked. In contrast, DOD’s September 1998 FOIA regulation, in a chapter on FOUO, states that “the marking of records at the time of their creation provides notice of FOUO content and facilitates review when a record is requested under the FOIA.” In our view, a policy can provide flexibility to address highly situational circumstances and also provide specific guidance and examples of how to properly exercise this flexibility. In addition, we found that both DOE’s and DOD’s OUO and FOUO programs lack clear language identifying examples of inappropriate usage of OUO or FOUO markings. Without such language, DOE and DOD cannot be confident that their personnel will not use these markings to conceal mismanagement, inefficiencies, or administrative errors, or to prevent embarrassment to themselves or their agency. While both DOE and DOD offer training to staff on managing OUO and FOUO information, neither agency requires any training of its employees before they are allowed to identify and mark information as OUO or FOUO, although some staff will eventually take OUO or FOUO training as part of other mandatory training. 
In addition, neither agency has implemented an oversight program to determine the extent to which employees are complying with established policies and procedures. While many DOE units offer training on DOE’s OUO policy, DOE does not have a departmentwide policy that requires OUO training before an employee is allowed to designate a document as OUO. As a result, some DOE employees may be identifying and marking documents for restriction from dissemination to the public or persons who do not need to know the information to perform their jobs, and yet may not be fully informed as to when it is appropriate to do so. At DOE, the level of training that employees receive is not systematic and varies considerably by unit, with some requiring OUO training at some point as a component of other periodic employee training, and others having no requirements at all. DOD similarly has no departmentwide training requirements before staff are authorized to identify, mark, and protect information as FOUO. The department relies on the individual services and field activities within DOD to determine the extent of training that employees receive. When training is provided, it is usually included as part of a unit’s overall security training, which is required for many but not all employees. There is no requirement to track which employees received FOUO training, nor is there a requirement for periodic refresher training. Some DOD components, however, do provide FOUO training for employees as part of their security awareness training. Neither DOE nor DOD knows the level of compliance with OUO and FOUO program policies and procedures because neither agency conducts any oversight to determine whether the OUO and FOUO programs are being managed well. According to a senior manager in DOE’s Office of Classification, the agency does not review OUO documents to assess whether they are properly identified and marked. This condition appears to contradict the DOE policy requiring the agency’s senior officials to assure that the OUO programs, policies, and procedures are effectively implemented. Similarly, DOD does not routinely conduct oversight of its FOUO program to assure that it is properly managed. Without oversight, neither DOE nor DOD can assure that staff are complying with agency policies. We are aware of at least one recent case in which DOE’s OUO policies were not followed. In 2005, several stories appeared in the news about revised estimates of the cost and length of the cleanup of high-level radioactive waste at DOE’s Hanford Site in southeastern Washington. This information was controversial because this multibillion-dollar project has a history of delays and cost overruns, and DOE was restricting a key document containing recently revised cost and time estimates from being released to the public. This document, which was produced by the U.S. Army Corps of Engineers for DOE, was marked Business Sensitive by DOE. However, according to a senior official in the DOE Office of Classification, Business Sensitive is not a recognized marking in DOE. Therefore, there is no DOE policy or guidance on how to handle or protect documents marked with this designation. This official said that if information in this document needed to be restricted from release to the public, then the document should have been stamped OUO and the appropriate FOIA exemption should have been marked on the document. 
In closing, the lack of clear policies, effective training, and oversight in DOE's and DOD's OUO and FOUO programs could result in both over- and underprotection of unclassified yet sensitive government documents. Having clear policies and procedures in place can mitigate the risk of program mismanagement and can help DOE and DOD management assure that OUO or FOUO information is appropriately marked and handled. DOE and DOD have no systemic procedures in place to assure that staff are adequately trained before designating documents OUO or FOUO, nor do they have any means of knowing the extent to which established policies and procedures for making these designations are being complied with. These issues are important because they affect DOE's and DOD's ability to assure that the OUO and FOUO programs are identifying, marking, and safeguarding documents that truly need to be protected in order to prevent potential damage to governmental, commercial, or private interests. Mr. Chairman, this concludes GAO's prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact either Davi M. D'Agostino at (202) 512-5431 or dagostinod@gao.gov, or Gene Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Ann Borseth, David Keefer, Kevin Tarmann, and Ned Woodward.

Examples of the types of information covered by several of the FOIA exemptions include the following (numbered by exemption):

2. Internal personnel rules and practices: routine internal personnel matters such as performance standards and leave practices; internal matters the disclosure of which would risk the circumvention of a statute or agency regulation, such as law enforcement manuals.

3. Specifically exempted from disclosure by federal statute: nuclear weapons design (Atomic Energy Act); tax return information (Internal Revenue Code).

4. Trade secrets and commercial or financial information: scientific and manufacturing processes (trade secrets); sales statistics, customer and supplier lists, profit and loss data, and overhead and operating costs (commercial/financial information).

5. Inter- and intra-agency memoranda: memoranda and other documents that contain advice, opinions, or recommendations on decisions and policies (deliberative process); documents prepared by an attorney in contemplation of litigation (attorney work-product); confidential communications between an attorney and a client (attorney-client).
In the interest of national security and personal privacy and for other reasons, federal agencies place dissemination restrictions on information that is unclassified yet still sensitive. The Department of Energy (DOE) and the Department of Defense (DOD) have both issued policy guidance on how and when to protect sensitive information. DOE marks documents with this information as Official Use Only (OUO) while DOD uses the designation For Official Use Only (FOUO). GAO was asked to (1) identify and assess the policies, procedures, and criteria DOE and DOD employ to manage OUO and FOUO information; and (2) determine the extent to which DOE's and DOD's training and oversight programs assure that information is identified, marked, and protected according to established criteria. As GAO reported earlier this month, both DOE and DOD base their programs on the premise that information designated as OUO or FOUO must (1) have the potential to cause foreseeable harm to governmental, commercial, or private interests if disseminated to the public or persons who do not need the information to perform their jobs; and (2) fall under at least one of eight Freedom of Information Act (FOIA) exemptions. While DOE and DOD have policies in place to manage their OUO or FOUO programs, our analysis of these policies showed a lack of clarity in key areas that could allow inconsistencies and errors to occur. For example, it is unclear which DOD office is responsible for the FOUO program, and whether personnel designating a document as FOUO should note the FOIA exemption used as the basis for the designation on the document. Also, both DOE's and DOD's policies are unclear regarding at what point a document should be marked as OUO or FOUO and what would be an inappropriate use of the OUO or FOUO designation. For example, OUO or FOUO designations should not be used to conceal agency mismanagement. In our view, this lack of clarity exists in both DOE and DOD because the agencies have put greater emphasis on managing classified information, which is more sensitive than OUO or FOUO. In addition, while both DOE and DOD offer training on their OUO and FOUO policies, neither DOE nor DOD has an agencywide requirement that employees be trained before they designate documents as OUO or FOUO. Moreover, neither agency conducts oversight to assure that information is appropriately identified and marked as OUO or FOUO. DOE and DOD officials told us that limited resources, and in the case of DOE, the newness of the program, have contributed to the lack of training requirements and oversight. Nonetheless, the lack of training requirements and oversight of the OUO and FOUO programs leaves DOE and DOD officials unable to assure that OUO and FOUO documents are marked and handled in a manner consistent with agency policies and may result in inconsistencies and errors in the application of the programs.
Beginning in 1930, the Congress established the first transportation discretionary program under which the executive branch could select specific transportation projects for federal funding, thus providing it with some latitude in allocating federal funds to the states. In that year, the Public Lands Program was established to pay for road work on the nation's public lands. In 1978, the Congress set up the Discretionary Bridge and Discretionary Interstate programs. The Discretionary Bridge Program was established to replace or rehabilitate high-cost bridges, while the Discretionary Interstate Program aimed to accelerate the construction of the Interstate Highway System. When the Interstate 4R Discretionary Program was begun in 1982, its goal was to resurface, restore, rehabilitate, and reconstruct the Interstate Highway System. Finally, the Ferry Boats and Facilities Program, begun in 1991, was intended to fund the construction of ferry boats and ferry terminal facilities. (See apps. I-V for additional information on ISTEA's provisions and eligibility requirements for each of the discretionary programs discussed in this report.) The Secretary of Transportation is responsible for selecting projects under the discretionary programs. The Secretary has delegated this responsibility to the FHWA Administrator. FHWA's Office of Engineering administers the programs, solicits applications from states, and compiles the applications and information for selection. States submit applications to FHWA's division offices, which either send the applications to FHWA's regional offices for compilation with other states' applications or send them directly to FHWA's headquarters, as is done for the Interstate Discretionary and 4R programs. Regional offices then send the applications to FHWA's Washington, D.C., headquarters. As table 1 shows, since fiscal year 1992, the five discretionary programs we reviewed received over $2.7 billion in federal funds—ranging from about $99 million provided for the Ferry Boats and Facilities Program to almost $1.6 billion provided for the Interstate Discretionary Program. The states' requests for discretionary funds have generally exceeded the amounts available for funding. For example, in fiscal year 1997, the states submitted almost $682 million in requests for about $61 million in discretionary bridge funds. Similarly, the states submitted nearly $1.3 billion in requests for about $66 million available in fiscal year 1997 from Interstate 4R funds. Although the states have been able to build specific transportation projects and facilities through discretionary funding, the amounts available through these five programs represent only a small portion of the highway funds that the states receive annually. For example, in fiscal year 1997, FHWA's Office of the Administrator provided the states with $213.7 million in discretionary funding for the five programs we reviewed—about 1 percent of the estimated $20 billion that FHWA provided to the states for that year. FHWA uses a two-phase selection process involving both program staff and the Office of the Administrator. As displayed in figure 1, the first phase begins when program staff in the Office of Engineering send a solicitation notice out to FHWA's regions calling for project candidates from the states. Project candidates are compiled by the regions and divisions and forwarded to the headquarters staff offices.
The headquarters staff with specific expertise in the various discretionary program areas compile the regional submissions and review and analyze each candidate. By applying program-specific statutory and administrative criteria, program staff screen and prioritize project applications for each discretionary program. FHWA program staff use both statutory and administrative criteria and other factors to identify and prioritize projects for each discretionary program. As displayed in table 2, the criteria are program specific and differ among the five discretionary programs. For example, by using data to determine the physical condition of a bridge, Discretionary Bridge Program staff calculate a numerical score or rating factor for each candidate project. The rating factor allows FHWA to prioritize the projects eligible for funding. In contrast, Public Lands Program staff use factors, such as whether a project is located in a state with 3 percent or more of the nation’s public lands, to screen and prioritize projects. Interstate Discretionary and 4R Program staff consider whether other sources of funding, such as unobligated Interstate Construction and National Highway System funds, are available for projects, while Ferry Boats and Facilities Program staff evaluate factors, such as a project’s benefits and the ability of federal funds to leverage private funds, when they prioritize the candidate projects. After applying the criteria, program staff identify ineligible projects and group the eligible projects into four priority categories—most promising, promising, qualified, and not qualified. The second phase of the selection process begins after FHWA’s Office of Engineering provides the Office of the Administrator with a report for each discretionary program that lists the eligible projects and includes other information, such as what funding the states received in prior years through the discretionary programs and any congressional interest or support for specific projects. The Office of the Administrator receives the reports for all of the programs simultaneously—generally in the fall—and begins the final selection process at that time. Using the discretion granted to the Administrator under each of the programs, the Office of the Administrator can apply additional considerations to the information received from the program staff before making final selections. For example, according to FHWA’s acting Deputy Administrator, the Office of the Administrator may try to spread the discretionary funds among as many states as possible to ensure geographic equity. As a result, the final selections could differ from the staffs’ groupings if these groupings provided the bulk of funding for only a few states in a given program. The official also said that the Office of the Administrator, in consultation with the Secretary, also pays close attention to requests from Members of Congress in making the final project selections. In addition, congressional legislation, in the form of earmarks, may require the Secretary to fund a particular project through a discretionary program. Other considerations may include whether the requesters would be willing to accept partial funding, whether conditions pertaining to the project have changed since the state originally requested funding, and other factors, such as whether states have received an equitable share of funding over time. The process is concluded when the Office of the Administrator sends out the list of its selections. 
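To make the first-phase mechanics concrete, the sketch below encodes the screening and grouping steps for the Discretionary Bridge Program, whose eligibility tests are spelled out later in this report (project cost over $10 million or at least twice the state's Highway Bridge Replacement and Rehabilitation Program apportionment, and a rating factor of 100 or less). The priority cutoffs are hypothetical—the report does not publish FHWA's actual thresholds—so treat this as an illustration of the two-step logic, not FHWA's method.

```python
# Illustrative sketch only: the eligibility tests come from the Discretionary
# Bridge Program description in this report; the priority cutoffs are
# hypothetical, since FHWA's actual grouping thresholds are not published here.
from dataclasses import dataclass

@dataclass
class BridgeCandidate:
    state: str
    cost_millions: float            # estimated project cost
    apportionment_millions: float   # state's bridge program apportionment
    rating_factor: float            # condition-based score; lower = higher need

def is_eligible(c: BridgeCandidate) -> bool:
    # Statutory screens: high-cost project and a rating factor of 100 or less.
    high_cost = (c.cost_millions > 10.0
                 or c.cost_millions >= 2 * c.apportionment_millions)
    return high_cost and c.rating_factor <= 100.0

def priority_category(c: BridgeCandidate) -> str:
    # Hypothetical cutoffs mapping rating factors to the four categories.
    if not is_eligible(c):
        return "ineligible"
    if c.rating_factor <= 40:
        return "most promising"
    if c.rating_factor <= 65:
        return "promising"
    if c.rating_factor <= 90:
        return "qualified"
    return "not qualified"

candidates = [
    BridgeCandidate("VA", 25.0, 8.0, 35.0),   # high cost, low rating factor
    BridgeCandidate("OH", 12.0, 9.0, 85.0),   # high cost, weaker priority
    BridgeCandidate("NM", 6.0, 7.0, 50.0),    # fails the cost screens
]
for c in candidates:
    print(f"{c.state}: {priority_category(c)}")
```

Run as written, the sketch places the first candidate in "most promising," the second in "qualified," and screens the third out as ineligible, mirroring the report's description of staff identifying ineligible projects and grouping the rest by priority.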
The two phases differ in the extent of documentation available to explain how decisions are made—that is, how program staff apply the criteria to group the projects and how the Office of the Administrator makes the final selections. Specifically, program staff provide written documentation of the statutory and administrative criteria used to screen candidate projects, as well as the projects' descriptions, the amounts of funding requested, prior allocations to states, and congressional interest. In contrast, officials in the Office of the Administrator could verbally describe the additional factors they have considered in making the final project selections but could not provide any written documentation that explained either the additional selection factors they used or how they applied these factors to each candidate project. In addition, the final selection list that the Office of the Administrator issues at the conclusion of the selection process does not explain or justify why each project was selected. The current process to select projects under the discretionary programs has been in place since fiscal year 1995. In general, under the current process, the Office of the Administrator relies more on its discretion in making the selections than it did under an earlier process that relied almost entirely on program staffs' analyses and recommendations. We found that under the current process, there were differences between the projects selected by the Office of the Administrator and those given higher priority by the FHWA staff responsible for evaluating and prioritizing the projects. Our analysis of the discretionary programs' selection process revealed that two distinct processes existed during our review time frames. During fiscal years 1992-94, program staff for each of the discretionary programs we reviewed recommended specific projects and funding amounts after ranking projects in order of priority. As displayed in figure 2, the Office of the Administrator selected over 98 percent of all the projects that the program staff recommended in fiscal years 1992-94 (180 projects selected from 183 staff recommendations). For example, the Office of the Administrator selected 15 of the 17 public lands projects recommended by FHWA program staff in fiscal year 1992. Similarly, the Office of the Administrator selected all six Interstate 4R projects recommended by FHWA program staff for funding in fiscal year 1992. The high level of consistency between the staffs' recommendations and the Office of the Administrator's selections changed after fiscal year 1994. For the fiscal year 1995 funding cycle, the Office of the Administrator directed the program staff to group the projects into four categories—most promising, promising, qualified, and not qualified—rather than provide specific project and funding recommendations. According to FHWA's Acting Deputy Administrator, the Office of the Administrator found it difficult to make equitable decisions using the rank-order lists and thus made the change to the groupings. The official also said that the process did not give the Office of the Administrator enough flexibility to take into account other equally important considerations, such as the geographic distribution of funding. As figure 2 shows, beginning in fiscal year 1995, the Office of the Administrator selected a declining proportion of projects from the higher-priority categories.
In fiscal year 1995, 92 percent of the projects that the Office of the Administrator selected were projects that staff had categorized as "promising" or "most promising"; the figure was 69 percent in fiscal 1996 and 59 percent in fiscal 1997—about 73 percent overall for the 3-year period. This tendency was particularly evident in comparing the staffs' groupings and the final project selections in three of the five programs we reviewed—Public Lands, Interstate 4R, and Discretionary Bridge. For example, in fiscal year 1997, while the Office of the Administrator selected 9 of the 36 public lands projects categorized as "most promising," it also selected 16 of 59 projects categorized as "qualified"—the lowest-priority category for the public lands program that year. Half of all public lands projects that the Office of the Administrator selected that year came from the qualified category. In the Ferry Boats and Facilities Program, the share of selections drawn from the most promising and promising categories initially declined from 100 percent to 80 percent in fiscal year 1995 but remained around 80 percent for fiscal 1996-97. For the other discretionary program we reviewed—Interstate Discretionary—the projects that the program staff categorized as "higher priority" were the ones selected by the Office of the Administrator under both selection procedures in place during fiscal years 1992-97. The Office of the Administrator selected 100 percent of the projects recommended by the program staff during fiscal years 1992-94 and 100 percent of those that the staff placed in the most promising category during fiscal 1995-97. The change in the selection process did not affect the Interstate Discretionary Program because the funds available for the program during fiscal years 1993-97 exceeded the requests; all eligible candidate projects were selected for full funding. (Details on the Office of the Administrator's selections for fiscal years 1992-97 are provided in apps. I-V for the five programs we reviewed.) FHWA officials stated that the selection of projects in lower-priority categories did not mean that the Office of the Administrator was making poor transportation investments. They noted that most of the states' project submissions could be considered good projects, given the tremendous Interstate infrastructure needs and tight federal-aid funding. Therefore, they believed it would be unlikely that the Office of the Administrator could select a poor project for funding. In addition, our analyses revealed no instance where the Office of the Administrator selected a project that staff had classified as statutorily ineligible during fiscal years 1995-97. We provided DOT with draft copies of this report for review and comment. We met with DOT officials—including the FHWA Acting Administrator, Acting Deputy Administrator, and Associate Administrator for Program Development—to discuss their comments. DOT did not have any comments on the report's overall findings but requested that the report be modified to address two general areas of concern. First, the Department indicated that it wanted us to broaden our discussion of the additional factors that the Office of the Administrator uses to make final project selections. Second, the Department suggested that we modify our analysis of the project selection statistics to account for the financial limitations of the programs.
DOT said that because several of the discretionary requests greatly exceeded the available funding, the promising and most promising categories contained more projects than could be selected. DOT noted that because our analyses did not take this funding limitation into account, our analyses implied that the Office of the Administrator chose many projects from the lower-priority categories. In response to DOT’s comments, we included additional information on the factors that the Office of the Administrator considers in making project selections in the body of the report. We revised our analysis of the projects the Office of the Administrator selected for funding in fiscal years 1995-97 to address DOT’s second concern. We also included additional information in the appendixes on the funds allocated to projects in each category for fiscal years 1995-97. Other editorial comments provided by the Department have been incorporated where appropriate. To identify the criteria and selection process that DOT used to select projects for discretionary funding, we obtained from FHWA officials information that documented the process followed for each program during fiscal years 1992-97. The information from FHWA described the number and dollar amount of projects submitted for funding, the statutory and administrative criteria for eligibility and selection, and the Office of the Administrator’s final project selections. Some of the documents also included information about congressional interest and whether the Congress had earmarked any of the program dollars. We discussed the programs with officials in FHWA’s Office of Engineering who had knowledge of each program, as well as with FHWA’s Acting Deputy Administrator, who summarized the factors used by the Office of the Administrator to determine which projects would receive funding. We also met with officials in one FHWA regional office to discuss their role in the discretionary programs. To determine how changes in the process have affected which projects DOT selects for discretionary funding, we examined the staffs’ analysis of the projects submitted for funding in each program, as well as the Office of the Administrator’s subsequent project selections. We then compared the project selections resulting from the process that the Department currently uses with the project selections resulting from the process used during fiscal years 1992-94. We performed our review from July through September 1997 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Transportation, the Acting Administrator, FHWA; cognizant congressional committees; and other interested parties. We will make copies available to others upon request. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. Public Lands Highways. The Public Lands Program was initially established in 1930. The Federal-Aid Highway Act of 1970 changed the funding source for the program from the general fund to the Highway Trust fund, effective in fiscal year 1972. The funding level for public lands highways was $16 million per year during fiscal years 1972-82. The Surface Transportation Assistance Act of 1982 (1982 STAA, P.L. 
97-424) increased the annual authorization level to $50 million for fiscal years 1983-86, but the Surface Transportation and Uniform Relocation Assistance Act of 1987 (1987 STURAA, P.L. 100-17) reduced this amount to $40 million for fiscal 1987-91. The program funds projects that are within, adjacent to, or provide access to the areas served by public lands highways—such as roads in national parks, forests, or Indian reservations. Public Lands Highways funds may be used on eligible public lands highways, defined by the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) (P.L. 102-240) as (1) a forest road or (2) any highway through unappropriated or unreserved public lands; nontaxable Indian lands; or other federal reservations that are under the jurisdiction of and maintained by a public authority and open to public travel. A variety of activities are eligible, including planning, research, engineering, and construction. Projects ranging from reconstructing a road to adding parking facilities are eligible. The federal share of project costs is 100 percent. ISTEA authorized the following funding levels for the Public Lands Highways Program: fiscal year 1992—$48.62 million; fiscal 1993—$58.14 million; fiscal 1994—$58.14 million; fiscal 1995—$58.14 million; fiscal 1996—$58.48 million; and fiscal 1997—$58.48 million. Funds remain available for the fiscal year allocated plus 3 years. See table I.1. During fiscal years 1992-97, the top five recipient states received about 37 percent of all Public Lands Highway allocations. (See table I.2.) During fiscal years 1992-94, FHWA's Office of the Administrator selected 97 percent of the projects that the FHWA staff recommended for funding. (See table I.3.) During fiscal years 1995-97, FHWA's Office of the Administrator selected a declining percentage of projects grouped in the most promising and promising categories. (See table I.4.) During fiscal years 1995-97, the Office of the Administrator allocated a declining proportion of dollars to projects the staff categorized as "most promising" or "promising." (See table I.5.) Discretionary Bridge. The Discretionary Bridge Program was established by the Surface Transportation Assistance Act of 1978 (1978 STAA, P.L. 95-599). The 1978 legislation required that $200 million be withheld from the Highway Bridge Replacement and Rehabilitation Program apportionment for each of fiscal years 1979-82 to be used by the Secretary of Transportation as a discretionary fund to replace or rehabilitate bridges that cost more than $10 million each or twice the state's apportionment. The Surface Transportation Assistance Act of 1982 continued the program at the same funding level through fiscal year 1986. The act also provided that the Federal Highway Administration (FHWA) establish a formal process to rank and select discretionary bridge projects for funding and required that discretionary bridge projects be on a federal-aid highway system. The Surface Transportation and Uniform Relocation Assistance Act of 1987 increased the discretionary set-aside to $225 million for each fiscal year during 1987-91. Eligible projects are bridge rehabilitation or replacement projects that cost more than $10 million or at least twice the amount of Highway Bridge Replacement and Rehabilitation Program funds apportioned to the state in which the bridge is located. The discretionary bridge projects must be on a federal-aid system. Candidate bridges must have a rating factor of 100 or less to be eligible, unless they were selected prior to November 1983.
The federal share is 80 percent. Bridge projects on the Interstate Highway System that have been selected for discretionary bridge funding receive funding at 50 percent of the requested amount, primarily because other Interstate discretionary funds are available. ISTEA continued the program and authorized that $349.5 million be set aside over the 6-year period fiscal 1992-97—$49 million for fiscal 1992, $59.5 million each for fiscal 1993-94, and $60.5 million each for fiscal 1995-97. See table II.1. During fiscal years 1992-97, the top five recipient states received about 69 percent of all Discretionary Bridge allocations. (See table II.2.) During fiscal years 1992-94, the Office of the Administrator selected 100 percent of the projects that the FHWA program staff recommended for funding. (See table II.3.) During fiscal years 1995-97, FHWA's Office of the Administrator selected a declining percentage of the projects the program staff had categorized as "most promising" and "promising." (See table II.4.) During fiscal years 1995-97, the Office of the Administrator allocated about $126 million to projects the staff had categorized as either "most promising" or "promising." (See table II.5.) Interstate Discretionary Program. The program was originally created by section 115(a) of the Surface Transportation Assistance Act of 1978 to accelerate construction of the Interstate Highway System. The Surface Transportation Assistance Act of 1982 and the Surface Transportation and Uniform Relocation Assistance Act of 1987 both continued and modified the Interstate Discretionary Program. Interstate Discretionary funds may be used for the same purpose as Interstate Construction funds—initial construction of remaining portions of the Interstate System. However, only work eligible under the provisions of the Federal-Aid Highway Act of 1981 and included in the 1981 Interstate Cost Estimate is eligible for Interstate Discretionary funding. The federal share is 90 percent; it is 80 percent for projects that provide additional capacity, unless the added capacity is a high-occupancy-vehicle or auxiliary lane (which also carries a 90-percent federal share). Section 1020 of ISTEA revised 23 U.S.C. 118(b), (c) & (d) and authorized a $100 million set-aside from the Interstate Construction Program for the Interstate Discretionary Program annually for fiscal years 1992-96. FHWA also provided Interstate Discretionary funds from lapsed Interstate Construction funds that had reached the end of their availability period. The funds are available until expended. Currently, there is a balance of approximately $61 million available for future Interstate Discretionary allocations. See table III.1. During fiscal years 1992-97, the top five recipient states received about 72 percent of all Interstate Discretionary allocations. (See table III.2.) During fiscal years 1992-94, FHWA's Office of the Administrator selected 100 percent of the projects that FHWA staff recommended for funding. (See table III.3.) During fiscal years 1995-97, FHWA's Office of the Administrator also selected all of the eligible projects for full funding, since the amount available for allocation exceeded the amount requested. (See table III.4.) During fiscal years 1995-97, the staff categorized all eligible projects as "most promising," and all eligible projects were funded. (See table III.5.) Interstate 4R Discretionary Program. The program was originally created by section 115(a) of the Surface Transportation Assistance Act of 1982.
Funds were provided for the program from lapsed Interstate 4R apportionments, with additional criteria. The Surface Transportation and Uniform Relocation Assistance Act of 1987 provided for a $200 million set-aside for each of the fiscal years 1988-92 from the Interstate 4R authorization for the continuation of the Interstate 4R discretionary fund and provided criteria and factors to be used in the distribution of funds. Interstate 4R discretionary funds may be used for resurfacing, restoring, rehabilitating, and reconstructing the Interstate System, including providing additional capacity. The federal share is 90 percent; it may be increased up to 95 percent in states with large areas of public lands and up to 100 percent for safety, traffic control, and car/vanpool projects. Section 1020 of ISTEA amended 23 U.S.C. 118(c)(2) and set aside $54 million for fiscal year 1992, $64 million each year for fiscal 1993-96, and $65 million for fiscal 1997. Of the amounts set aside, ISTEA required that $16 million for fiscal year 1992 and $17 million for each of fiscal 1993 and 1994 be used for improvements on the Kennedy Expressway in Chicago, Illinois. ISTEA terminated the apportioned Interstate 4R Fund Program and provided that the Interstate 4R set-aside come from the National Highway System program. See table IV.1. During fiscal years 1992-97, the top five recipient states received about 70 percent of all Interstate 4R Discretionary allocations. (See table IV.2.) During fiscal years 1992-94, FHWA's Office of the Administrator selected 94 percent of the projects that FHWA staff recommended for funding. (See table IV.3.) During fiscal years 1995-97, about 76 percent of the projects that FHWA's Office of the Administrator selected for funding were those the staff grouped in the most promising and promising categories. (See table IV.4.) During fiscal years 1995-97, the Office of the Administrator allocated $173 million to projects that the staff had categorized as "most promising" or "promising." (See table IV.5.) Ferry Boats and Ferry Terminal Facilities. In 1991, ISTEA created a discretionary funding program for the construction of ferry boats and ferry terminal facilities and authorized funding from the Highway Trust Fund. Ferry boats and facilities must be publicly owned. The operation of the ferry facilities must be on a route classified as a public road, except an Interstate route. The federal share is 80 percent. ISTEA authorized $100 million—$14 million in fiscal year 1992, $17 million each year for fiscal 1993-96, and $18 million for fiscal 1997. Funds are available until expended. See table V.1. During fiscal years 1992-97, the top five recipient states received about 50 percent of all Ferry Boats and Facilities allocations. (See table V.2.) During fiscal years 1992-94, FHWA's Office of the Administrator selected 100 percent of the projects that FHWA staff recommended for funding. (See table V.3.) During fiscal years 1995-97, about 81 percent of the projects that FHWA's Office of the Administrator selected were those that the staff had grouped in the most promising and promising categories. (See table V.4.) During fiscal years 1995-97, the FHWA Office of the Administrator allocated about $40 million to projects the staff had categorized as "most promising" or "promising." (See table V.5.) Major contributors to this report (app. VI): Joseph A. Christoff, Bonnie Pignatiello Leer, David I. Lichtenfeld, Gail F. Marnik, Jonathan Pingle, and Phyllis F. Scheinberg.
Pursuant to a congressional request, GAO reviewed the Department of Transportation's (DOT) project selection process for five federal discretionary programs, focusing on: (1) the selection process and criteria DOT uses to fund projects under its discretionary highway programs; and (2) how DOT's process has changed and how the changes may have affected which projects DOT selects for discretionary funding. GAO noted that: (1) DOT uses a two-phase process for selecting and funding transportation projects for the five discretionary programs GAO reviewed; (2) in the first phase, Federal Highway Administration (FHWA) program staff in the field and headquarters compile and evaluate the applications that states submit for discretionary funding; (3) program staff screen the applications by applying eligibility criteria established by statute or administratively; (4) on the basis of written criteria, program staff group the projects into four categories that range in priority from most promising to not qualified and submit the groupings to the Office of the Administrator; (5) the submission to the Office of the Administrator provides information on each candidate project, data on discretionary funds that each state received during prior years, and the current level of congressional interest; (6) in the second phase, the Office of the Administrator uses the information submitted by the program staff, as well as other factors, to evaluate the projects and make the final selections; (7) according to FHWA's Acting Deputy Administrator, the Office of the Administrator has tried to ensure that the dollars are spread fairly among all the states and that the interests of members of Congress are addressed; (8) in contrast to the analyses that the program staff complete in the first phase, the Office of the Administrator does not document the factors it uses to select the final projects or its rationale for making the final selections; (9) DOT is authorized to establish procedures for reviewing and selecting transportation projects under its discretionary programs; (10) GAO's review of the selection process revealed that under the current process, in place during fiscal years 1995-1997, the Office of the Administrator relied more on its discretion and less on the program staff's input and analyses than it did under an earlier process used during fiscal years 1992-1994; (11) under this new process, which was designed to provide the Office of the Administrator with more flexibility in taking into account items such as the geographic distribution of funding, 73 percent of the projects that the Office of the Administrator selected were categorized as "most promising" or "promising"; (12) the Office of the Administrator selected a declining portion of projects from these categories over the 3-year period; and (13) during fiscal years 1992-94, when staff ranked projects in order of priority and recommended specific projects and funding amounts, the Office of the Administrator selected over 98 percent of all projects that the program staff recommended.
The Schedules of Federal Debt including the accompanying notes present fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, the balances as of September 30, 2004, 2003, and 2002, for Federal Debt Managed by BPD; the related Accrued Interest Payables and Net Unamortized Premiums and Discounts; and the related increases and decreases for the fiscal years ended September 30, 2004 and 2003. BPD maintained, in all material respects, effective internal control relevant to the Schedule of Federal Debt related to financial reporting and compliance with applicable laws and regulations as of September 30, 2004, that provided reasonable assurance that misstatements, losses, or noncompliance material in relation to the Schedule of Federal Debt would be prevented or detected on a timely basis. Our opinion is based on criteria established under 31 U.S.C. § 3512 (c), (d) (commonly referred to as the Federal Managers’ Financial Integrity Act) and the Office of Management and Budget (OMB) Circular A-123, revised June 21, 1995, Management Accountability and Control. We found matters involving information security controls that we do not consider to be reportable conditions. We will communicate these matters to BPD’s management, along with our recommendations for improvement, in a separate letter to be issued at a later date. Our tests for compliance in fiscal year 2004 with the statutory debt limit disclosed no instances of noncompliance that would be reportable under U.S. generally accepted government auditing standards or OMB audit guidance. However, the objective of our audit of the Schedule of Federal Debt for the fiscal year ended September 30, 2004, was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. BPD’s Overview on Federal Debt Managed by the Bureau of the Public Debt contains information, some of which is not directly related to the Schedules of Federal Debt. We do not express an opinion on this information. However, we compared this information for consistency with the schedules and discussed the methods of measurement and presentation with BPD officials. Based on this limited work, we found no material inconsistencies with the schedules. Management is responsible for the following: preparing the Schedules of Federal Debt in conformity with U.S. generally accepted accounting principles; establishing, maintaining, and assessing internal control to provide reasonable assurance that the broad control objectives of the Federal Managers’ Financial Integrity Act are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the Schedules of Federal Debt are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles and (2) management maintained effective related internal control as of September 30, 2004, the objectives of which are the following: Financial reporting: Transactions are properly recorded, processed, and summarized to permit the preparation of the Schedule of Federal Debt for the fiscal year ended September 30, 2004, in conformity with U.S. generally accepted accounting principles. 
Compliance with laws and regulations: Transactions related to the Schedule of Federal Debt for the fiscal year ended September 30, 2004, are executed in accordance with laws governing the use of budget authority and with other laws and regulations that could have a direct and material effect on the Schedule of Federal Debt. We are also responsible for testing compliance with selected provisions of laws and regulations that have a direct and material effect on the Schedule of Federal Debt. Further, we are responsible for performing limited procedures with respect to certain other information appearing with the Schedules of Federal Debt. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the Schedules of Federal Debt; assessed the accounting principles used and any significant estimates made by management; evaluated the overall presentation of the Schedules of Federal Debt; obtained an understanding of internal control relevant to the Schedule of Federal Debt as of September 30, 2004, related to financial reporting and compliance with laws and regulations (including execution of transactions in accordance with budget authority); tested relevant internal controls over financial reporting and compliance, and evaluated the design and operating effectiveness of internal control related to the Schedule of Federal Debt as of September 30, 2004; considered the process for evaluating and reporting on internal control and financial management systems under the Federal Managers' Financial Integrity Act; and tested compliance in fiscal year 2004 with the statutory debt limit (31 U.S.C. § 3101(b), as amended by Pub. L. No. 107-199, § 1, 116 Stat. 734 (2002) and Pub. L. No. 108-24, 117 Stat. 710 (2003)). We did not evaluate all internal controls relevant to operating objectives as broadly described by the Federal Managers' Financial Integrity Act, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to controls over financial reporting and compliance. Because of inherent limitations in internal control, misstatements due to error or fraud, losses, or noncompliance may nevertheless occur and not be detected. We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. We did not test compliance with all laws and regulations applicable to BPD. We limited our tests of compliance to a selected provision of a law that has a direct and material effect on the Schedule of Federal Debt for the fiscal year ended September 30, 2004. We caution that noncompliance may occur and not be detected by these tests and that such testing may not be sufficient for other purposes. We performed our work in accordance with U.S. generally accepted government auditing standards and applicable OMB audit guidance. In commenting on a draft of this report, BPD concurred with the facts and conclusions in our report. The comments are reprinted in appendix I. Federal debt managed by the Bureau of the Public Debt comprises debt held by the public and debt held by certain federal government accounts, the latter of which is referred to as intragovernmental debt holdings. As of September 30, 2004 and 2003, outstanding gross federal debt managed by the bureau totaled $7,379 and $6,783 billion, respectively.
The increase in gross federal debt of $596 billion during fiscal year 2004 was due to an increase in gross intragovernmental debt holdings of $213 billion and an increase in gross debt held by the public of $383 billion. As Figure 1 illustrates, intragovernmental debt holdings have steadily increased since fiscal year 2000; debt held by the public decreased in fiscal year 2001 but increased in fiscal years 2002 through 2004. The primary reason for the increases in intragovernmental debt holdings is the annual surpluses in the Federal Old-Age and Survivors Insurance Trust Fund, Civil Service Retirement and Disability Trust Fund, Federal Hospital Insurance Trust Fund, Federal Disability Insurance Trust Fund, and Military Retirement Fund. The fiscal years 2002 through 2004 increases in debt held by the public are due primarily to total federal spending exceeding total federal revenues. As of September 30, 2004, gross debt held by the public totaled $4,307 billion and gross intragovernmental debt holdings totaled $3,072 billion.
[Figure 1: Total Gross Federal Debt Outstanding (in billions)]
Interest expense incurred during fiscal year 2004 consists of (1) interest accrued and paid on debt held by the public or credited to accounts holding intragovernmental debt during the fiscal year, (2) interest accrued during the fiscal year, but not yet paid on debt held by the public or credited to accounts holding intragovernmental debt, and (3) net amortization of premiums and discounts. The primary components of interest expense are interest paid on the debt held by the public and interest credited to federal government trust funds and other federal government accounts that hold Treasury securities. The interest paid on the debt held by the public affects the current spending of the federal government and represents the burden in servicing its debt (i.e., payments to outside creditors). Interest credited to federal government trust funds and other federal government accounts, on the other hand, does not result in an immediate outlay of the federal government because one part of the government pays the interest and another part receives it. However, this interest represents a claim on future budgetary resources and hence an obligation on future taxpayers. This interest, when reinvested by the trust funds and other federal government accounts, is included in the programs' excess funds not currently needed in operations, which are invested in federal securities. During fiscal year 2004, interest expense incurred totaled $322 billion: $158 billion on debt held by the public and $164 billion on intragovernmental debt holdings. As Figure 2 illustrates, total interest expense decreased each year from fiscal year 2001 through 2003, but increased in fiscal year 2004. Average interest rates on principal balances outstanding as of fiscal year end are disclosed in the Notes to the Schedules of Federal Debt.
[Figure 2: Total Interest Expense (in billions)]
Debt held by the public reflects how much of the nation's wealth has been absorbed by the federal government to finance prior federal spending in excess of total federal revenues. As of September 30, 2004 and 2003, gross debt held by the public totaled $4,307 billion and $3,924 billion, respectively (see Figure 1), an increase of $383 billion. The borrowings and repayments of debt held by the public increased from fiscal year 2003 to 2004 primarily due to Treasury's decision to finance current operations using more short-term securities.
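As a quick cross-check, the year-over-year changes and interest expense components cited above can be recomputed directly from the reported balances. A minimal Python sketch, using only the dollar figures in this overview (in billions):

```python
# Reported balances, in billions of dollars (figures from this overview).
debt_2003 = {"held_by_public": 3_924, "intragovernmental": 2_859}
debt_2004 = {"held_by_public": 4_307, "intragovernmental": 3_072}

# Year-over-year increases by component.
increases = {k: debt_2004[k] - debt_2003[k] for k in debt_2004}
print(increases)  # {'held_by_public': 383, 'intragovernmental': 213}

# Gross federal debt totals and the total increase.
print(sum(debt_2003.values()))                            # 6783
print(sum(debt_2004.values()))                            # 7379
print(sum(debt_2004.values()) - sum(debt_2003.values()))  # 596

# Fiscal year 2004 interest expense components, in billions.
print(158 + 164)  # 322 (public + intragovernmental)
```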
As of September 30, 2004, $3,846 billion, or 89 percent, of the securities that constitute debt held by the public were marketable, meaning that once the government issues them, they can be resold by whoever owns them. Marketable debt is made up of Treasury bills, Treasury notes, Treasury bonds, and Treasury Inflation-Protected Securities (TIPS) with maturity dates ranging from less than 1 year out to 30 years. Of the marketable securities held by the public as of September 30, 2004, $2,576 billion, or 67 percent, will mature within the next 4 years (see Figure 3). As of September 30, 2004 and 2003, notes and TIPS held by the public maturing within the next 10 years totaled $2,274 billion and $1,919 billion, respectively, an increase of $355 billion. The government also issues nonmarketable securities, which cannot be resold and have maturity dates from on demand to more than 10 years, to the public, state and local governments, and foreign governments and central banks. As of September 30, 2004, nonmarketable securities totaled $461 billion, or 11 percent of debt held by the public. As of that date, nonmarketable securities primarily consisted of savings securities totaling $204 billion and special securities for state and local governments totaling $158 billion. Intragovernmental debt holdings represent balances of Treasury securities held by over 200 individual federal government accounts with either the authority or the requirement to invest excess receipts in special U.S. Treasury securities that are guaranteed for principal and interest by the full faith and credit of the U.S. Government. Intragovernmental debt holdings primarily consist of balances in the Social Security, Medicare, Military Retirement, and Civil Service Retirement and Disability trust funds. As of September 30, 2004, such funds accounted for $2,726 billion, or 89 percent, of the $3,072 billion intragovernmental debt holdings balances (see Figure 4). As of September 30, 2004 and 2003, gross intragovernmental debt holdings totaled $3,072 billion and $2,859 billion, respectively (see Figure 1), an increase of $213 billion. The majority of intragovernmental debt holdings are Government Account Series (GAS) securities. GAS securities consist of par value securities and market-based securities, with terms ranging from on demand out to 30 years. Par value securities are issued and redeemed at par (100 percent of the face value), regardless of current market conditions. Market-based securities, however, can be issued at a premium or discount and are redeemed at par value on the maturity date or at market value if redeemed before the maturity date.
Significant Events in FY 2004
Increased Issuance of Treasury Inflation-Protected Securities (TIPS)
During fiscal year 2004, Treasury increased the issuances of inflation-indexed securities known as Treasury Inflation-Protected Securities (TIPS). In the May 5, 2004, Quarterly Refunding Statement, Treasury introduced both a 5-year TIPS and a 20-year TIPS. Each security will be issued semiannually, with the 5-year TIPS issued in April and October and the 20-year TIPS issued in January and July. A 10-year TIPS will continue to be issued in January, April, July, and October. The first 20-year TIPS was issued on July 30, 2004, in the amount of $11 billion and will be reopened in January 2005 and July 2005. Similarly, the 5-year TIPS will be issued in late October 2004 and will be reissued in April 2005 and October 2005.
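To make the TIPS mechanics concrete, the following is a simplified sketch. The figures are hypothetical and the indexation is simplified (actual Treasury calculations interpolate daily reference CPI values); the two features it illustrates, scaling the principal by the change in the CPI and redeeming at the greater of the adjusted principal or the original par value, are described in note 2 to the schedules.

```python
def tips_cash_flows(par, fixed_rate, cpi_at_issue, cpi_path):
    """Simplified TIPS sketch: semiannual coupons are a fixed rate
    applied to a CPI-adjusted principal, and redemption pays the
    greater of the adjusted principal or the original par value.
    All inputs here are hypothetical."""
    coupons = []
    for cpi in cpi_path:  # reference CPI at each semiannual coupon date
        adjusted_principal = par * cpi / cpi_at_issue
        coupons.append(fixed_rate / 2 * adjusted_principal)
    redemption = max(par * cpi_path[-1] / cpi_at_issue, par)
    return coupons, redemption

# $1,000 par, 2 percent stated rate, reference CPI rising from 188.0 to 193.0.
coupons, redemption = tips_cash_flows(1000, 0.02, 188.0, [190.0, 193.0])
print([round(c, 2) for c in coupons])  # [10.11, 10.27]
print(round(redemption, 2))            # 1026.6
```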
In the February 2004 Quarterly Refunding Statement, Treasury announced its intentions to compute price awards in auctions to six decimal places per hundred. Calculating prices to six decimals ensures price uniqueness for all discount rates or yields bid in all marketable Treasury securities auctions. This change also makes Treasury's pricing practice consistent with secondary market practices. In addition, the August 2004 Quarterly Refunding Statement announced that the limit on noncompetitive Treasury auction awards will be $5 million for all auctions. The previous limit on noncompetitive awards, in effect since 1991, was $1 million for bill auctions and $5 million for coupon auctions. These changes were effective on September 20, 2004.
The Uniform Offering Circular (UOC), in conjunction with the announcement for each auction, provides the terms and conditions for the auction and issuance of marketable Treasury securities to the public. Treasury has rewritten the UOC in plain language because the wide variety of bidders in our securities auctions – broker-dealers, depository institutions, nonfinancial firms, individuals, etc. – have widely different levels of experience in dealing with federal regulations in general and with securities-related concepts and regulations in particular. On July 28, 2004, the rewritten UOC was published in the Federal Register. Treasury believes that a better understanding of the auction rules may increase direct participation in our auctions and improve the auction process overall, resulting in lower borrowing costs.
Series HH Savings Bonds Discontinued
As announced on February 18, 2004, the final Series HH Savings Bonds were issued to the public on August 31, 2004. These bonds are current-income securities that pay interest to their owners semiannually. Series HH Savings Bonds issued through August 2004 will continue to earn interest until they reach final maturity 20 years after issue. Treasury withdrew the offering of HH bonds due to the high cost in relation to the relatively small value of transactions. These bonds were initially issued in 1980 and were available in exchange for Series E or EE Savings Bonds. After August 31, 2004, holders of eligible E and/or EE bonds may choose to accept the proceeds of maturing issues, invest them in marketable bills, notes, or TIPS at auction, or purchase Series EE or I Savings Bonds.
EasySaver Program To Be Discontinued
EasySaver was introduced in November 1998 to encourage savings and broaden access to Treasury securities. Customers enrolled in EasySaver could buy U.S. Savings Bonds through authorized debits to their bank accounts, thus allowing employees of smaller businesses to have access to a regular savings plan like the millions of Americans who buy bonds through payroll savings plans at work. Since the same EasySaver features, and additional ones, are available in TreasuryDirect (www.treasurydirect.gov), the Bureau of the Public Debt stopped accepting EasySaver enrollments on December 31, 2003. All existing automatic EasySaver debits will be discontinued in March 2005. Customers are encouraged to enroll in TreasuryDirect to buy, manage, and redeem their securities online. TreasuryDirect represents a direct relationship between customers — individuals and financial professionals alike — and the U.S. Treasury.
The TreasuryDirect website (www.treasurydirect.gov) provides complete information for people who want to buy securities directly from the Treasury, in a way that is easy for everyone to understand. Electronic purchasers can use TreasuryDirect to schedule electronic Series EE and I Savings Bond purchases. TreasuryDirect enhanced its Internet website capability during fiscal year 2004. One new feature offered is a payroll deduction option to purchase a Zero-Percent Certificate of Indebtedness (C of I). While these securities do not earn any interest, they can be used as a source of funds to purchase savings bonds within a TreasuryDirect account after accumulating a minimum of $25 in the C of I or by scheduling a purchase in advance. Customers may purchase electronic Series EE and I Savings Bonds ranging in value from $25 to $30,000, in increments as low as $0.01. There is a $30,000 annual limit per series per person. Customers may redeem a bond one year after purchase or hold the bond to maturity.
Depositary Compensation Securities Redeemed, Replaced By Appropriation
In July 2003, Treasury began issuing Depositary Compensation Securities (DCS), a nonmarketable security, to compensate those financial institutions serving as financial agents of the United States for essential banking services provided to the Government. These securities were issued to financial agents in an amount sufficient to generate interest payments equal to the monthly expenses for financial agent services provided to Treasury. As part of the fiscal year 2004 Treasury appropriation, a permanent and indefinite appropriation was enacted to reimburse financial institutions serving as financial agents for the United States. As a result, more than $16 billion of DCS were called on February 29, 2004.
Statutory Debt Limit Being Evaluated
As a consequence of the changes in the government's financing needs, resulting in part from the current economic environment, Treasury Secretary John Snow urged the Congress on August 2, 2004, to raise the debt limit. Delays in raising the current debt limit of $7,384 billion have forced Treasury to enter into a debt issuance suspension period (DISP) that involves Treasury's departure from its normal investment and redemption procedures for certain federal government accounts. Consistent with legal authorities available to the Secretary, on October 14, 2004, Treasury suspended sales of State and Local Government Series (SLGS) nonmarketable securities and was unable to fully invest contributions in the Government Securities Investment Fund (G-Fund) of the federal employees' 401(k)-type retirement plan, the Thrift Savings Plan.
Historical Perspective
Federal debt outstanding is one of the largest legally binding obligations of the federal government. Nearly all the federal debt has been issued by the Treasury, with a small portion being issued by other federal government agencies. Treasury issues debt securities for two principal reasons: (1) to borrow needed funds to finance the current operations of the federal government and (2) to provide an investment and accounting mechanism for certain federal government accounts' excess receipts, primarily trust funds. Total gross federal debt outstanding has dramatically increased over the past 25 years, from $827 billion as of September 30, 1979, to $7,379 billion as of September 30, 2004 (see Figure 5). Large budget deficits emerged during the 1980s due to tax policy decisions and increased outlays for defense and domestic programs.
Through fiscal year 1997, annual federal deficits continued to be large and debt continued to grow at a rapid pace. As a result, total federal debt increased more than fivefold between 1980 and 1997. However, by fiscal year 1998, federal debt held by the public was beginning to decline. In fiscal years 1998 through 2001, the amount of debt held by the public fell by $476 billion, from $3,815 billion to $3,339 billion. As a consequence of the changes in the federal government's financing needs, resulting from increased federal outlays, tax policy decisions, and the deterioration of overall economic performance, from fiscal year 2001 to 2004 debt held by the public rose by $968 billion, from $3,339 billion to $4,307 billion. Even in those years where debt held by the public declined, total federal debt increased because of increases in intragovernmental debt holdings. Over the past 4 fiscal years, intragovernmental debt holdings increased by $852 billion, from $2,220 billion as of September 30, 2000, to $3,072 billion as of September 30, 2004. By law, trust funds have the authority or are required to invest surpluses in federal securities. As a result, the intragovernmental debt holdings balances primarily represent the cumulative surplus of funds due to the trust funds' cumulative annual excess of tax receipts, interest credited, and other collections compared to spending. As shown in Figure 6, interest rates have fluctuated over the past 25 years. The average interest rates reflected here represent the original issue weighted effective yield on securities outstanding at the end of the fiscal year.
[Figure 6: Average Interest Rates of Federal Debt Outstanding (Unaudited)]
[Schedules of Federal Debt Managed by the Bureau of the Public Debt For the Fiscal Years Ended September 30, 2004 and 2003: tabular schedules not reproduced here]
The accompanying notes are an integral part of these schedules.
Notes to the Schedules of Federal Debt Managed by the Bureau of the Public Debt For the Fiscal Years Ended September 30, 2004 and 2003
Note 1. Significant Accounting Policies
The Schedules of Federal Debt Managed by the Bureau of the Public Debt (BPD) have been prepared to report fiscal year 2004 and 2003 balances and activity relating to monies borrowed from the public and certain federal government accounts to fund the U.S. government's operations. Permanent, indefinite appropriations are available for the payment of interest on the federal debt and the redemption of Treasury securities. The Constitution empowers the Congress to borrow money on the credit of the United States. The Congress has authorized the Secretary of the Treasury to borrow monies to operate the federal government within a statutory debt limit. Title 31 U.S.C. authorizes Treasury to prescribe the debt instruments and otherwise limit and restrict the amount and composition of the debt. BPD, an organizational entity within the Fiscal Service of the Department of the Treasury, is responsible for issuing Treasury securities in accordance with such authority and for accounting for the resulting debt. In addition, BPD has been given the responsibility to issue Treasury securities to trust funds for trust fund receipts not needed for current benefits and expenses.
BPD issues and redeems Treasury securities for the trust funds based on data provided by program agencies and other Treasury entities. The schedules were prepared in conformity with U.S. generally accepted accounting principles, using data from BPD's automated accounting system, the Public Debt Accounting and Reporting System. Interest costs are recorded as expenses when incurred, instead of when paid. Certain Treasury securities are issued at a discount or premium. These discounts and premiums are amortized over the term of the security, using an interest method for all long-term securities and the straight-line method for short-term securities (the two methods are contrasted in the illustrative sketch below). The Department of the Treasury also issues Treasury Inflation-Protected Securities (TIPS). The principal for TIPS is adjusted over the life of the security based on the Consumer Price Index for All Urban Consumers.
Note 2. Federal Debt Held by the Public
As of September 30, 2004 and 2003, Federal Debt Held by the Public consisted of the following: [table of principal balances and average interest rates not reproduced here]. Treasury issues marketable bills at a discount and pays the par amount of the security upon maturity. The average interest rate on Treasury bills represents the original issue effective yield on securities outstanding as of September 30, 2004 and 2003, respectively. Treasury bills are issued with a term of one year or less. Treasury issues marketable notes and bonds as long-term securities that pay semiannual interest based on the securities' stated interest rate. These securities are issued at either par value or at an amount that reflects a discount or a premium. The average interest rate on marketable notes and bonds represents the stated interest rate adjusted by any discount or premium on securities outstanding as of September 30, 2004 and 2003. Treasury notes are issued with a term of 2 to 10 years, and Treasury bonds are issued with a term of more than 10 years. Treasury also issues TIPS, which have interest and redemption payments that are tied to the Consumer Price Index, the leading measurement of inflation. TIPS are issued with a term of more than 5 years. At maturity, TIPS are redeemed at the inflation-adjusted principal amount or the original par value, whichever is greater. TIPS pay a semiannual fixed rate of interest applied to the inflation-adjusted principal. Inflation-indexed securities, TIPS, were previously included with Treasury notes and bonds. The fiscal year 2003 amounts and average interest rates have been reclassified to conform with the presentation adopted in fiscal year 2004. As of September 30, 2004, nonmarketable securities primarily consisted of $204,246 million in U.S. Savings Securities, $158,214 million in securities issued to State and Local Governments, $5,881 million in Foreign Series Securities, and $29,995 million in Domestic Series Securities. As of September 30, 2003, nonmarketable securities primarily consisted of $201,606 million in U.S. Savings Securities, $148,366 million in securities issued to State and Local Governments, $11,007 million in Foreign Series Securities, $29,995 million in Domestic Series Securities, and $14,991 million in Depositary Compensation Securities.
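Note 1 above states that discounts and premiums are amortized with an interest method for long-term securities and the straight-line method for short-term securities. The sketch below contrasts the two approaches on a hypothetical discount bond; the bond's terms are invented for illustration and do not come from the schedules.

```python
def straight_line_amortization(par, issue_price, periods):
    """Amortize a discount evenly over the security's life
    (the method note 1 describes for short-term securities)."""
    return (par - issue_price) / periods

def effective_interest_schedule(par, issue_price, coupon_rate, yield_rate, periods):
    """Effective-interest amortization (an 'interest method'): each
    period's expense is the carrying value times the periodic yield,
    and the amortization is the excess over the cash coupon."""
    carrying = issue_price
    rows = []
    for _ in range(periods):
        expense = carrying * yield_rate
        amortization = expense - par * coupon_rate
        carrying += amortization
        rows.append((round(expense, 2), round(amortization, 2), round(carrying, 2)))
    return rows

# Hypothetical 2-period bond: $1,000 par issued at $980.87,
# 2 percent coupon and 3 percent yield per period.
print(straight_line_amortization(1000, 980.87, 2))  # 9.565 per period
for row in effective_interest_schedule(1000, 980.87, 0.02, 0.03, 2):
    print(row)  # carrying value converges to par at maturity
```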
Treasury issues nonmarketable securities at either par value or at an amount that reflects a discount or a premium. The average interest rate on the nonmarketable securities represents the original issue weighted effective yield on securities outstanding as of September 30, 2004 and 2003. Nonmarketable securities are issued with terms ranging from on demand to more than 10 years. Government Account Series (GAS) securities are nonmarketable securities issued to federal government accounts. Federal Debt Held by the Public includes GAS securities issued to certain federal government accounts. One example is the GAS securities held by the Government Securities Investment Fund (G-Fund) of the federal employees' Thrift Savings Plan. Federal employees and retirees who have individual accounts own the GAS securities held by the fund. For this reason, these securities are considered part of the Federal Debt Held by the Public rather than Intragovernmental Debt Holdings. The GAS securities held by the G-Fund consist of overnight investments redeemed one business day after their issue. The net increases in amounts borrowed from the fund during fiscal years 2004 and 2003 are included in the respective Borrowings from the Public amounts reported on the Schedules of Federal Debt. Federal Debt Held by the Public includes federal debt held outside of the U.S. government by individuals, corporations, Federal Reserve Banks (FRB), state and local governments, and foreign governments and central banks. The FRB owned $698 billion and $654 billion of Federal Debt Held by the Public as of September 30, 2004 and 2003, respectively. These securities are held in the FRB System Open Market Account (SOMA) for the purpose of conducting monetary policy.
Note 3. Intragovernmental Debt Holdings
As of September 30, 2004 and 2003, Intragovernmental Debt Holdings are owed to the following accounts, among others (the table of balances is not reproduced here): the Federal Old-Age and Survivors Insurance Trust Fund; the Civil Service Retirement and Disability Fund; the Federal Hospital Insurance Trust Fund; the Federal Disability Insurance Trust Fund; the Employees' Life Insurance Fund; the Federal Supplementary Medical Insurance Trust Fund; the Pension Benefit Guaranty Corporation Fund; the Foreign Service Retirement & Disability Fund; the Savings Association Insurance Fund (SAIF); Treasury's Exchange Stabilization Fund; and the Airport & Airway Trust Fund. These amounts include marketable Treasury securities as well as GAS securities; as of September 30, 2004 and 2003, the Civil Service Retirement and Disability Fund and the Federal Disability Insurance Trust Fund held both types (amounts not reproduced here). Administering agencies referenced in the table include the Social Security Administration (SSA), Office of Personnel Management (OPM), Department of Health and Human Services (HHS), Department of Defense (DOD), Department of Labor (DOL), Federal Deposit Insurance Corporation (FDIC), Department of Energy (DOE), Department of Housing and Urban Development (HUD), Department of Transportation (DOT), Department of State (DOS), Department of Veterans Affairs (VA), and Department of the Treasury (Treasury).
Intragovernmental Debt Holdings primarily consist of GAS securities.
Treasury issues GAS securities at either par value or at an amount that reflects a discount or a premium. The average interest rates for fiscal years 2004 and 2003 were 5.4 percent and 5.5 percent, respectively. The average interest rate represents the original issue weighted effective yield on securities outstanding as of September 30, 2004 and 2003. GAS securities are issued with terms ranging from on demand to 30 years.
Note 4. Interest Expense
Interest expense on Federal Debt Managed by BPD for fiscal years 2004 and 2003 consisted of interest accrued on Federal Debt Held by the Public and on Intragovernmental Debt Holdings, in each case together with the net amortization of premiums and discounts (the table of amounts is not reproduced here).
Note 5. Fund Balance With Treasury
The Fund Balance with Treasury, a non-entity, intragovernmental account, is not included on the Schedules of Federal Debt and is presented for informational purposes.
In addition to the individual named above, Erik A. Braun, Dean D. Carpenter, Dennis L. Clarke, Chau L. Dinh, Mickie E. Gray, Jennifer L. Hall, Jay McTigue, Lori B. Ryza, Kathryn J. Peterson, and Jason O. Strange made key contributions to this report.
GAO is required to audit the consolidated financial statements of the U.S. government. Due to the significance of the federal debt held by the public to the governmentwide financial statements, GAO has also been auditing the Bureau of the Public Debt's (BPD) Schedules of Federal Debt annually. The audit of these schedules is done to determine whether, in all material respects, (1) the schedules prepared are reliable, (2) BPD management maintained effective internal control relevant to the Schedule of Federal Debt, and (3) BPD complies with selected provisions of significant laws related to the Schedule of Federal Debt. Federal debt managed by BPD consists of Treasury securities held by the public and by certain federal government accounts, referred to as intragovernmental debt holdings. The level of debt held by the public reflects how much of the nation's wealth has been absorbed by the federal government to finance prior federal spending in excess of total federal revenues. Intragovernmental debt holdings represent balances of Treasury securities held by federal government accounts, primarily federal trust funds such as Social Security, that typically have an obligation to invest their excess annual receipts over disbursements in federal securities. In GAO's opinion, BPD's Schedules of Federal Debt for fiscal years 2004 and 2003 were fairly presented in all material respects, and BPD maintained effective internal control related to the Schedule of Federal Debt as of September 30, 2004. GAO also found no instances of noncompliance in fiscal year 2004 with the statutory debt limit. As of September 30, 2004 and 2003, federal debt managed by BPD totaled about $7,379 billion and $6,783 billion, respectively. At the end of fiscal year 2004, debt held by the public as a percentage of the U.S. economy was estimated at 37.5 percent, up from 33.1 percent at the end of fiscal year 2001. Further, certain trust funds (e.g., Social Security) continue to run surpluses, resulting in increased intragovernmental debt holdings. These debt holdings are backed by the full faith and credit of the U.S. government and represent a priority call on future budgetary resources. Gross federal debt increased 27 percent between the end of fiscal years 2001 and 2004. As a result of the increasing federal debt, on October 14, 2004, Treasury entered into a debt issuance suspension period to avoid exceeding the current $7,384 billion statutory debt limit and requested that the Congress take action to raise the debt limit by mid-November 2004. The total federal debt increased over each of the last 4 fiscal years. Debt held by the public decreased as a result of cash surpluses for fiscal year 2001, but increased during fiscal years 2002 through 2004, with the return of annual unified budget deficits. Intragovernmental debt holdings steadily increased during this 4-year period, primarily due to excess receipts over disbursements in federal trust funds.
Small businesses, such as startup companies, may choose to raise capital through a nonpublic offering, also called a private placement, when their capital needs exceed what might be available through other channels (such as bank loans, personal credit cards, or loans from friends and family). Public companies also may raise capital using private placements to benefit from lower transaction costs and to raise funds more quickly than in the public market. Private placements generally involve issuers, placement agents or finders, qualifying accredited investors, and regulators.
Issuers. Issuers include private and public companies, and pooled investment vehicles such as hedge, venture capital, or private equity funds.
Placement agents or finders. Issuers may employ placement agents (who are registered with SEC, the Financial Industry Regulatory Authority, or states) and finders (who are not registered with regulators) to access accredited or other qualified investors.
Qualifying accredited investors. Qualifying accredited investors can be individuals or institutions. Some individual accredited investors, called angel investors, invest in early-stage start-up companies with high growth potential. Other qualifying accredited investors, such as venture capital funds that invest in start-ups at later stages of development, are institutional accredited investors.
Regulators. SEC and state regulators have some oversight over the offering and sale of private placement securities. SEC generally limits its enforcement activity to fraud and does not conduct reviews of offering materials or verify accredited investor status. State regulators play a role in investigating potential fraud in private placements and in enforcement after certain exempted securities have been sold in their states.
Regulation D defines accredited investors and describes certain terms and conditions of offers and sales of private placement securities. Under Regulation D, accredited investors consist of institutional entities and individuals. Institutions that qualify as accredited investors include certain regulated financial institutions and certain entities with total assets of $5 million or more. Individuals who qualify as accredited investors are directors, executive officers, and general partners of the securities issuers, or persons who have a minimum annual income in excess of $200,000 ($300,000 with spouse) or net worth in excess of $1 million excluding the primary residence (see fig. 1). The focus of this report is on individual accredited investors. Regulation D establishes registration exemptions, with the exemptions differing in the size of offerings, the number and type of investors, or both. These exemptions are described in Rules 504, 505, and 506. Under Rule 504, the maximum dollar amount of the offering is $1 million in any 12-month period, and no limit is set on the number of investors. Under Rule 505, the maximum offering is $5 million in any 12-month period, and an unlimited number of accredited investors and up to 35 nonaccredited investors may invest. Under Rule 506, there are no offering dollar limits, and an unlimited number of accredited investors and up to 35 nonaccredited sophisticated investors may invest. Issuers making offerings in reliance on these rules cannot solicit investors through any form of general solicitation or general advertising.
SEC staff has indicated that an issuer would not contravene the prohibition against general solicitation if the issuer had a preexisting and substantive relationship with the investor. Of the three rules, Rule 506 is the most widely used exemption because there is no limit on the size of the offering or the number of accredited investors. Further, according to a GAO report, the burden of complying with state securities registration requirements under other exemptions may help explain why issuers rely more on Regulation D. SEC data show that businesses make more Regulation D offerings than registered public offerings (see table 1). Regulation D is designed to (1) eliminate any unnecessary restrictions that SEC rules place on small business issuers and (2) achieve uniformity between state and federal exemptions to facilitate capital formation consistent with investor protection. The accredited investor standard, which is used to adjust exemptive conditions under Regulation D, has undergone several changes since its adoption in 1982. As shown in figure 2, the qualifications for investment in private placement securities went through several interpretations and changes. For example, in 1953 the Supreme Court ruled that the applicability of registration exemptions depended on whether the purchasers of securities could "fend for themselves." In 1974, SEC adopted Rule 146, a predecessor to Rule 506, under which investors were considered financially sophisticated enough to be offered or to purchase private placement securities based on their knowledge and experience, their capability to evaluate the risks and merits, and their ability to bear the economic risk of the investment. In 1982, Regulation D—the goals of which included facilitating capital formation and promoting investor protection—replaced previous rules, including Rule 146. The income and net worth thresholds established in Regulation D were intended to serve as proxies for financial experience, sophistication, and adequate bargaining power. Since 1982, SEC has twice revised the individual accredited investor standard: it added the $300,000 threshold for joint income in 1988, and it excluded the value of the primary residence from the net worth calculation in 2011, as required by the Dodd-Frank Act. In recent years, market participants and a regulator have proposed several changes to the accredited investor standard for individuals. The proposals were generally based on concerns for investor protection and capital formation and consist of adjustments to income and net worth and alternative ways of measuring investor knowledge. For instance, two consumer advocacy groups, an investment company association, SEC, and the North American Securities Administrators Association (NASAA) have expressed concerns that the number of those eligible to be accredited investors has grown significantly since the thresholds were established. According to SEC, when the standard was first created, 1.87 percent of households qualified as accredited investors. SEC staff estimate that 9.04 percent of households would have qualified as accredited investors under the net worth standard in 2007; we estimate that removing the primary residence from households' net worth, as required in the Dodd-Frank Act, dropped the percentage to 7.2 percent (based on 2010 data).
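The individual accredited investor tests described above reduce to a simple predicate. The sketch below renders the criteria as characterized in this report; the class and field names are our own illustrative choices, and details such as the income look-back period are omitted:

```python
from dataclasses import dataclass

@dataclass
class Investor:
    annual_income: float           # individual income
    joint_income: float            # combined income with spouse
    net_worth_ex_residence: float  # net worth excluding primary residence
    is_insider: bool = False       # director, executive officer, or
                                   # general partner of the issuer

def is_accredited(inv: Investor) -> bool:
    """Individual accredited investor tests as characterized in this
    report: insider status, income in excess of $200,000 ($300,000
    with spouse), or net worth in excess of $1 million excluding the
    primary residence."""
    return (inv.is_insider
            or inv.annual_income > 200_000
            or inv.joint_income > 300_000
            or inv.net_worth_ex_residence > 1_000_000)

print(is_accredited(Investor(150_000, 310_000, 400_000)))  # True (joint income)
print(is_accredited(Investor(120_000, 180_000, 900_000)))  # False
```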
SEC and NASAA say that because private placement offerings are generally illiquid, complex, and not reviewed by state or federal regulators, the risks of investing in them are high and a higher level of protection may be necessary to protect accredited investors. In contrast, the angel investor community has expressed concerns about how any proposed changes could affect capital formation for start-up companies. Noting concerns about investor protection, SEC proposed several changes to Regulation D during the 2000s. In December 2006, SEC proposed a new standard for individuals with the goal of providing additional protection to accredited investors investing in certain pooled investment funds, including hedge funds. An investor would have to meet either the existing income or net worth criteria and also own at least $2.5 million in investments at the time of the purchase of the securities. SEC stated that with this proposal, it sought greater assurance that an individual likely would have sufficient knowledge and experience to evaluate the merit and risk of certain private placement investments or be able to hire someone able to do so. In their comment letters on this proposal, some market participants suggested that SEC adjust the income and net worth standards for inflation, while others suggested these standards be lowered or eliminated. Some commentators suggested that SEC use a percentage of the net worth standard for certain investments. Others agreed with the proposal but made suggestions such as decreasing the required investments owned. This proposal was never finalized because, according to SEC officials, it received competing demands and mixed comments during the comment period. In July 2007, SEC proposed other changes to align its rules with modern market practices and communications technologies without compromising investor protection. First, SEC proposed to create a new exemption for offers and sales to a new category of investors to be designated as "large accredited investors." Under this standard, individuals would have to own $2.5 million in investments or have an annual income of $400,000 (or $600,000 for married couples). Second, SEC proposed an alternative criterion for the accredited investor standard (to include $750,000 of investments owned) and adjusting the dollar-amount thresholds for investments owned, income, and net worth to reflect future inflation. According to SEC, it did not finalize these proposals either, for the same reasons: competing demands and the mixed comments it received during the comment period. There are pending changes to Regulation D as a result of the passage of the Dodd-Frank Act and the Jumpstart Our Business Startups Act (JOBS Act). First, Section 926 of the Dodd-Frank Act directs SEC to adopt a rule that would make the Rule 506 exemption unavailable to securities offerings in which "felons or other bad actors" were involved. SEC proposed a rule in 2011 defining bad actors and prohibiting them from using the Rule 506 exemptions if they have been convicted of, or are subject to court or administrative sanctions for, securities fraud or other violations of specific laws. The Commission adopted the final rule in July 2013; it prohibits bad actors (including the issuer and certain of the issuer's officers, significant owners, general partners, or managing members) whose disqualifying actions occurred after the rule's effective date from using Rule 506.
Actions that occurred before the effective date must be disclosed but do not bar use of Rule 506. Second, the Dodd-Frank Act also requires SEC to review the "accredited investor" definition in its entirety beginning in 2014 and every 4 years thereafter and to conduct rulemaking as SEC deems appropriate after each review. As a result, criteria within the standard could change. According to SEC officials, they are conducting their review of the entire definition in 2014, after we publish our report on market participants' views relating to existing and alternative criteria for qualifying as accredited. Third, the JOBS Act directs SEC to implement a rule that would permit companies to use general solicitation and advertising to offer securities under Rule 506 of Regulation D. In August 2012, SEC proposed a rule permitting companies to advertise and market such securities offerings provided that the issuer takes reasonable steps to verify the purchaser's accredited investor status. Currently, Rule 506 does not specifically require that the issuer verify that accredited investors meet the requirements. The comment period for the proposed rule closed in October 2012. On July 10, 2013, SEC adopted the final rule, which permits issuers using Rule 506 to employ general solicitation and advertising when all the purchasers are accredited investors. Among existing criteria, market participants whom we interviewed said a net worth criterion was most important for balancing the goals of protecting investors and facilitating capital formation. While market participants offered varying viewpoints on how adjusting the existing thresholds for net worth and income would affect investor protection, they tended to agree on how adjustments would affect capital formation. In particular, many argued that increasing the thresholds would limit capital formation. Our analysis of Federal Reserve consumer finance data indicates that increasing the existing thresholds would contribute to limiting the pool of eligible investors. Market participants said adjusting the existing net worth and income criteria is feasible, but market acceptance would depend on the extent of the adjustments. Seventeen of 27 market participants said that net worth was the most important criterion for balancing investor protection and capital formation. Nine of the 17 market participants said that net worth alone was the key criterion for balancing investor protection and capital formation. The other 8 participants stated that net worth combined with other criteria was important to balance investor protection and capital formation. Two market participants told us that net worth demonstrated an investor's ability to accumulate wealth over time. One market participant also thought it indicated an investor's understanding of financial risk and experience with investment decisions. A few market participants and an industry expert said that a high net worth provided a cushion against risky investments—that is, having enough assets to bear the loss of the investment should a start-up company fail. However, representatives from an industry association and a broker-dealer firm cautioned that net worth is not easily identifiable or verifiable because they have to rely on information from the investor. For example, net liabilities—a component of net worth—are a complex measure to calculate, and net worth can be subject to manipulation.
Of the two existing qualifying criteria, most market participants did not think income was the best criterion to balance investor protection and capital formation. Only 1 of the 27 market participants identified income alone as the most important criterion. However, one market participant said that an investor's income was more likely to decline than his or her net worth, and that net worth tended to be a better indicator than income of an investor's knowledge of the markets. For example, a wealthy professional would have no guarantee that his or her income would remain the same and might not have the investment experience or financial wherewithal, depending on his or her profession, to invest in private placements. An athlete or a doctor, for example, may qualify under the income standard, but it is unclear whether he or she could afford to hold a private placement for an indefinite period or absorb a complete loss. Market participants had differing opinions on whether the thresholds for the existing criteria should be adjusted, citing various advantages and disadvantages associated with balancing investor protection and capital formation. Some market participants (13 of 27) said the thresholds should be adjusted; these participants generally believed that the current thresholds should be increased but differed on how much. To better protect investors, some participants thought the levels should be increased to adjust for inflation—thus focusing on the original subset of the population that was targeted when the standards were first implemented. For example, officials from one industry group said that SEC designated the existing thresholds to identify a subset of the population that does not need SEC's regulatory investor protection and that the thresholds therefore should be readjusted periodically to ensure that they capture only the intended population. Others suggested that the thresholds are badly outdated and should be raised by more than the rate of inflation. However, almost an equal number of market participants (11 of 27) said the current thresholds should not be adjusted. For example, a number of participants commented that the net worth threshold already had been significantly adjusted because the primary residence is no longer considered part of the net worth calculation. In addition, officials from an investor association said that regulators should take into account regional differences when considering changes to the thresholds, because the thresholds already represent a significant dollar amount in states with lower average income levels. Thus, adjusting the existing criteria could negatively affect some regions more than others. For example, in a rural state, increased thresholds could reduce investors' ability to qualify under the standard and make it harder for local start-up companies to seek investors and capital inside the state, according to an investor association. Market participants had differing views on the impact of increasing the thresholds on investor protection (see fig. 3). Some thought that investor protection would be strengthened, and a similar number responded that increasing the current qualification criteria would have no effect on investor protection. Thirteen of the 27 individuals we interviewed said that increasing the minimum annual income requirement would strengthen investor protection. Ten of the 27 said it would have no effect, 3 said it would weaken investor protection, and 1 did not know.
Similarly, 13 of the 27 market participants said that increasing the minimum net worth requirement would strengthen investor protection. Twelve said it would have no effect, and 2 said it would weaken investor protection. Investor and industry associations that we interviewed separately did not support adjusting the thresholds of the existing criteria to increase investor protection. An investor association and an academic acknowledged the importance of establishing thresholds to protect investors who may not have the knowledge or ability to protect themselves against making risky investments. Regardless of the thresholds, a representative from an investor association said that its investors completed due diligence on potential investments to protect their financial portfolios. Furthermore, industry associations representing attorneys and financial advisers involved in private placements generally agreed that the current qualification criteria were adequate. They said that any changes to the thresholds might provide only marginal increases in investor protection. Market participants' views were more similar on the impact of increasing the income and net worth thresholds on capital formation for businesses (see fig. 4). Specifically, most said that increasing the thresholds would limit capital formation. Eighteen of the 27 individuals said increasing the minimum annual income requirement would limit capital formation. Eight of the 27 said it would have no effect, and 1 said it would promote capital formation. Fifteen of the 27 individuals said increasing the minimum net worth requirement would limit capital formation. Ten said increasing the net worth requirement would have no effect on capital formation, and 2 said it would promote it. Representatives from associations representing start-up companies and angel investors said that raising the current thresholds would limit capital formation. Specifically, the representatives said that raising the thresholds would reduce the current number of angel investors—a type of accredited investor who invests in start-up companies—and other eligible accredited investors. A representative of a venture capital association told us that removing the primary residence from the net worth calculation was a significant adjustment to the current standard and that any further changes would further limit the pool of accredited investors, discourage angel investing, and ultimately hinder the ability of start-up companies to access capital. For instance, a representative of a network of angel investors said that he knew other accredited investors in the network who no longer qualified because of the 2011 change in the net worth calculation. However, according to an official from NASAA and a former regulator, start-up companies have other avenues to access capital, such as new or expanded capital formation programs in the JOBS Act. As a result, these officials said that increasing the minimum thresholds would not limit a start-up company's ability to access capital. According to SEC staff, SEC does not have a list of accredited investors; maintaining such a list would be impractical because there are so many accredited investors, and it could raise privacy concerns. If maintaining such a list were possible, the costs of doing so would likely outweigh the benefits. Therefore, determining the potential impact of changes to the existing thresholds on the pool of accredited investors is difficult.
To estimate such impacts, we analyzed Survey of Consumer Finances data, the data that SEC also uses for these purposes. These data show that changing the thresholds for net worth and income would affect the pool of eligible accredited investors. Specifically, we found that increasing the minimum net worth or income thresholds would decrease the number of eligible household investors. Our analysis of federal household net worth data showed that adjusting the minimum net worth threshold from $1 million to approximately $2.3 million—the inflation-adjusted amount—would decrease the number of households qualifying as accredited from approximately 8.5 million to approximately 3.7 million. Table 2 illustrates how further adjusting the thresholds affects the number of eligible investors (a schematic version of this tabulation appears below). Market participants with whom we spoke said adjusting the thresholds upward for the existing net worth and income criteria would be feasible, but market acceptance would depend on how much the thresholds were adjusted. The majority of market participants (25 of 27) said it would be feasible to increase the income requirements, and 24 of 27 said it would be feasible to increase the net worth thresholds. Sixteen of 27 market participants said the market likely would accept increasing the minimum annual income requirement, and 17 said the market likely would accept increasing the minimum net worth requirement. The extent to which the market would accept increased thresholds would depend on which part of the market is considered. For example, according to four representatives from associations, the small business and investor communities would resist increased thresholds because they would limit the pool of investors. However, NASAA and a consumer advocacy group said that they would support increased thresholds because doing so would bolster investor protection. One market participant noted that increasing the thresholds could be a problem for some existing investors if they no longer qualified under the new thresholds, and consideration would have to be given to their existing investments. In our other interviews, market participants said that the current thresholds provide certainty and that, while adjusting them would be feasible, doing so would increase costs and therefore would not be well received by all market participants. For example, one industry expert and two representatives from investor associations said that the current criteria have the advantage of providing certainty for the industry, which encourages investments in start-up companies. However, the industry expert also thought the standard was overinclusive because of the increased number of people who can now meet the thresholds compared with the number eligible when the thresholds were first put into effect. Two representatives from industry associations also said that adjusting the thresholds upward would increase costs, because firms would have to change internal systems and procedures for assessing investors' status, making adjusted thresholds less accepted. In addition to asking about the existing criteria, we asked market participants about eight alternative criteria related to investors' financial resources and their understanding of financial risk. We identified these alternative criteria from previous SEC proposed changes to the accredited investor standard, relevant academic literature, and criteria used by foreign regulators.
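The threshold analysis described above amounts to re-applying a cutoff to weighted survey records. The sketch below shows the shape of that tabulation; the records and weights are invented for illustration (they are not Survey of Consumer Finances data), so the outputs do not reproduce the estimates in table 2:

```python
# Illustrative records only -- not Survey of Consumer Finances data.
# Each tuple: (household net worth excluding primary residence,
#              survey weight in millions of households represented).
households = [
    (500_000, 40.0),
    (1_200_000, 5.5),
    (2_000_000, 2.0),
    (3_000_000, 1.0),
    (6_000_000, 0.7),
]

def eligible_millions(threshold):
    """Weighted count (in millions) of households whose net worth
    exceeds the threshold -- the same tabulation that, run against
    real SCF data, produces estimates like those in table 2."""
    return sum(weight for net_worth, weight in households if net_worth > threshold)

print(eligible_millions(1_000_000))  # 9.2 (a $1 million test)
print(eligible_millions(2_300_000))  # 1.7 (an inflation-adjusted test)
```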
While citing net worth as the most important, market participants indicated that criteria based on financial resources (25 of 27) and on understanding of financial risk (22 of 27) could both be important in determining accredited investor status. Among the financial resources criteria, market participants thought a liquid investments requirement (that is, a minimum dollar amount of investments in assets that can be easily sold, are marketable, and whose value can be verified by a financial institution) was the most important for balancing investor protection and capital formation. Among the understanding of financial risk criteria, market participants selected the use of a registered investment adviser as most important (see fig. 5). Market participants offered varying views on alternative criteria, citing advantages and disadvantages of either replacing criteria related to an investor's financial resources or adding criteria related to an investor's understanding of financial risk.
Liquid investments requirement
Many market participants told us that the liquid investments criterion could either replace the current criteria or be added to them. As shown in figure 6, 13 of 27 market participants said that replacing the existing criteria with a new requirement related to an investor's liquid investments would strengthen investor protection; however, 16 said it would limit capital formation. Many market participants thought that the liquid investments requirement should be considered as an addition to the existing accredited investor criteria. Three market participants said it was indicative of cumulative investment experience or an investor's understanding of financial risk. Three market participants additionally commented that it was important for investors to have liquid assets because liquid securities could easily be sold to cover setbacks in other investments. Most market participants (17 of 27) said it would be feasible to replace the existing accredited investor standard with a liquid investments requirement. However, they were divided about whether it was likely to be accepted by the market (13 likely to accept, 13 unlikely to accept). A representative from one association commented that a liquid investments requirement would be straightforward to implement. However, a few others said that investment portfolios might be difficult to verify and that asking financial institutions to verify investments would make this criterion less feasible. Two market participants, as well as a representative from an industry association, said that clearly defining liquid investments was important and that the definition would affect whether the criterion would be feasible.
Fixed-percentage investment
Only one-third of market participants (9 of 27) told us that a fixed-percentage criterion would strengthen investor protection, and few (5 of 27) told us that it should be considered as an addition to the current criteria. A fixed-percentage investment requirement would limit an investor's investment in a single, nonpublic securities offering to a certain percentage of his or her net worth or income (see the sketch below).
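A fixed-percentage criterion reduces to a one-line computation. In the sketch below, the 10 percent figure and the choice of base are assumptions for illustration; the report does not specify either:

```python
def max_single_offering_investment(net_worth, income, pct=0.10):
    """Cap on investment in one nonpublic offering under a
    fixed-percentage criterion. The 10 percent default and the use
    of the larger of net worth or income are illustrative
    assumptions; the report does not specify either."""
    return pct * max(net_worth, income)

print(max_single_offering_investment(1_500_000, 250_000))  # 150000.0
```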
Many market participants said replacing the current standard with fixed-percentage investments would have no effect on investor protection (14 of 27) and that it would limit capital formation (15 of 27). Two market participants noted that this criterion would help to promote diversification in an investor's portfolio and could promote investor protection. For example, one representative from an industry association told us that an individual's investment portfolio would better approximate financial resources than net worth or income, and another representative from an industry association said it would help to minimize the risk posed to investors. However, in general, market participants thought that replacing the existing criteria would not be feasible (18 of 27) and would not likely be accepted by the market (19 of 27). Two market participants commented that the requirement would be difficult to implement and that it might be burdensome to update and monitor the percentage figure. Furthermore, one market participant said that if an investor is only able to invest a small amount due to the fixed-percentage limit, entrepreneurs might not find that small amount helpful. Market participants also offered a range of views on alternative criteria related to an investor's understanding of financial risk. Figure 7 summarizes participants' views on these alternatives.
Registered investment adviser requirement
Of the six alternative criteria related to an investor's understanding of financial risk, participants most often said that the registered investment adviser criterion should be added to the existing criteria. This criterion would require an investor who wishes to invest in a private placement offering to use the services of a registered investment adviser to manage his or her investment accounts. A majority of market participants (14 of 27) thought that adding a registered investment adviser criterion to the current standard would strengthen investor protection but were divided about its effects on capital formation (see fig. 7). Some market participants and an industry association we interviewed noted that advisers examine the financial needs of their clients; as a result, they believed this requirement would promote investor protection. In addition, most market participants (21 of 27) said that a registered investment adviser criterion would be feasible to implement and that the market would be willing to accept it. For example, two market participants told us that this criterion could be feasible to implement because advisers are already vetted (registered and subject to regulation). Another market participant said that having an adviser would be a practical and objective way to approximate understanding of financial risk. However, one market participant said that this standard would represent a new cost to investors because they would have to pay for the services of a registered adviser. In addition, representatives from an investment adviser firm and an angel investor group said that investment advisers might not recommend private placements because of their asset management role and compensation structure. That is, investment advisers buy and sell assets on behalf of their clients and generally base their fees on the assets managed.
Because investments in private placements would be illiquid, investment advisers would not be able to manage such assets to generate greater returns. As a result, the compensation structure for investment advisers would not readily accommodate private placements. Self-certification investor designation. While market participants were divided on the impact that adding a self-certification criterion would have on investor protection, they generally agreed on its impact on capital formation. Through a self-certification process, the investor would self-certify based on standards such as membership in a network of investment groups, work experience (for example, serving as a director of a company), or investment experience. Ten market participants told us a self-certification process would weaken investor protection; 11 said it would have no effect. However, the majority (17 of 27) said it would promote capital formation. One market participant expressed concern that individuals could self-certify as accredited investors without actually meeting the qualification criteria. Another said that self-certification based on being in an investment group or being a director would not approximate understanding of financial risk. Another market participant commented that having a self-certification option would emphasize to potential investors that they should not take investments in private placements lightly. Market participants had favorable views about the feasibility and likely market acceptance of a self-certification criterion. A majority of market participants said that self-certification was feasible (19 of 27) and that the market would be willing to accept it (23 of 27). One market participant said that self-certification was similar to the current process, in which investors self-certify that they meet either income or net worth thresholds. Moreover, representatives from an investment company association said that investment and work experience would demonstrate an individual's ability to analyze investments. However, a representative from NASAA cautioned that the standards for qualifying work experience would have to be specific. License and certification standard. Market participants had conflicting views about the effect of adding a new license and certification requirement to the existing criteria. Under such a criterion, an investor would have to demonstrate knowledge of financial risks related to private placement investments to receive a license from regulators or authorized third parties. About the same number of market participants said that adding such a requirement would strengthen, weaken, or have no effect on investor protection (10, 8, and 9, respectively). Furthermore, 12 of 27 participants said it would promote capital formation and 10 of 27 said that it would have no effect. Five participants said the requirement would limit capital formation. One market participant said that a license and certification requirement would encourage unscrupulous companies to develop and issue certifications without making appropriate assessments. Another market participant told us that having a license might not mean that these investors could absorb losses. Many market participants (15 of 27) did not think it would be feasible to add a license and certification requirement. Three market participants and two representatives from associations raised a number of issues, including that some investors might be unwilling to take the steps to be licensed.
Nevertheless, 16 of 27 market participants thought the market would accept it. Investor sophistication test. Many market participants told us that adding the sophistication test criterion to the existing criteria would not likely be accepted by the market. With this criterion, an investor would complete SEC-approved investor education classes and pass a test that would qualify them as accredited. About the same number of market participants said that adding an investor sophistication test would either strengthen or have no effect on investor protection. Furthermore, 11 of 27 participants said that there would be no effect on capital formation; the remainder were split on whether the test would promote or limit it. While many participants (16 of 27) thought that the market would accept the addition of a test, market participants were split on whether a test would be feasible. Representatives from two industry associations suggested that investors could take the investments segment of the examination for stockbrokers. However, two market participants questioned whether investors would be willing to spend the time to prepare for and take a test to become certified as sophisticated investors. Furthermore, representatives from another association said the test could quickly become overly complex given the variety of investments available and the frequency of changes in investment products. Also, representatives from associations questioned who would administer the test and where the resources to administer it would come from. Education standard. Under this criterion, an investor would qualify as accredited based on an advanced degree in business or finance, a chartered financial analyst credential, or a similar designation. While 14 of 27 market participants told us that an education criterion would have no effect on investor protection, 11 of the participants said education would promote capital formation. For example, one market participant said that the education criterion would allow younger people who might not have financial resources but understand financial risks to invest in private placement offerings. However, two market participants said that having an advanced degree, such as in business, would not necessarily give an investor the knowledge necessary to invest wisely. A majority of market participants said that an education criterion would be feasible (23 of 27) and that the market would be willing to accept it (18 of 27). Representatives from one industry association we interviewed said verifying education would be a simple process. However, one market participant said that market acceptance would depend on the documentation that would be required of the investor. Opt-in provision. Market participants' views were divided about the inclusion of an opt-in provision. With an opt-in provision, the investor would sign a statement waiving significant rights to file a complaint or seek compensation unless there is fraud on the part of the issuer. A majority (17 of 27) of market participants said that the provision would decrease investor protection. One market participant commented that the provision would decrease investor protection because it would relieve an issuer of the burden of proof in relation to fraud. An equal number of participants said that it would promote or have no effect on capital formation. Market participants had more favorable views about the feasibility and likely market acceptance of a new opt-in criterion.
A majority of market participants said that it would be feasible (17 of 27) and that the market would be willing to accept it (21 of 27). For example, one market participant said that an opt-in form would be easy to implement. However, two market participants noted that state securities regulators would object strongly to the addition of this criterion because it would decrease protection for investors. The intended purposes of the accredited investor standard are to protect investors and streamline capital formation for small businesses. However, beyond excluding an investor's primary residence from the calculation of net worth in 2010, the standards for qualifying as an individual accredited investor have remained unchanged since the 1980s. As a result, some market participants and policymakers have raised concerns about diminished investor protection as the number of individuals able to qualify as accredited has increased over the years. Of the existing criteria for qualifying as an accredited investor, most market participants with whom we spoke said that net worth was the most important criterion for balancing investor protection and capital formation. However, these participants also identified alternative criteria that could achieve these goals. Specifically, market participants we spoke with most often supported adding a liquid investments requirement or the use of an investment adviser as alternative criteria that would balance investor protection and capital formation, be relatively feasible to implement, and have some level of acceptance by the market. SEC must review the definition of accredited investor every 4 years beginning in 2014. Our report on market participants' views on the existing and alternative qualification criteria for accredited investor status will be published in July 2013. By examining the potential effects of the existing and alternative criteria when it conducts its review in 2014, SEC could help ensure that it is using the most appropriate ones for qualifying investors as accredited. To further advance the goals of balancing investor protection and capital formation in its accredited investor standard, SEC should consider alternative criteria, including those in this report, to help determine an individual's ability to bear and understand the risks associated with investing in private placements. For example, market participants that we spoke with identified adding a liquid investments requirement or use of an investment adviser as alternative criteria that would balance investor protection and capital formation, be relatively feasible to implement, and have some level of acceptance by the market. We provided SEC a draft copy of this report for its review and comment. SEC provided written comments that are reprinted in appendix IV. SEC also provided technical comments that were incorporated as appropriate. In its written comments, SEC agreed with our recommendation. Specifically, SEC noted that the Dodd-Frank Act requires the agency to review the definition of accredited investor 4 years after the law's enactment. SEC said that it would consider alternative criteria (particularly, adding liquid investments and the use of a registered adviser) when it completes its mandated review. We are sending copies of this report to the Chairman of the Securities and Exchange Commission, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines market participants' views on (1) the existing criteria for qualifying for accredited investor status, and (2) alternative financial and other qualification criteria. For both objectives, we reviewed the Securities and Exchange Commission's (SEC) current definition of the accredited investor standard and the standard's legislative history. We reviewed SEC's proposed changes to the accredited investor standard since 2006, market participants' comment letters to SEC regarding these proposals, and academic literature related to accredited investors and private placements. We interviewed officials from SEC, the Financial Industry Regulatory Authority (FINRA), and the North American Securities Administrators Association (NASAA) and asked for their views about the current and proposed accredited investor standards. We asked SEC, FINRA, NASAA, the Department of the Treasury, the Commodity Futures Trading Commission, and the Consumer Financial Protection Bureau about any financial literacy efforts directed at individuals who qualify as accredited investors; they told us that they were not aware of any relevant financial literacy efforts that we could consider. We interviewed officials from the Ontario (Canada) Securities Commission and the United Kingdom Financial Services Authority to obtain information about their investor standards, especially those similar to the U.S. accredited investor standard. We conducted interviews with academics; lawyers knowledgeable about accredited investor issues; other securities market and investment experts; and trade associations representing investors, capital formation groups, small businesses, and private equity groups. These associations included the Securities Industry and Financial Markets Association, Angel Capital Association (ACA), National Venture Capital Association (NVCA), American Association of Individual Investors (AAII), Investment Adviser Association (IAA), National Small Business Association, SecondMarket, and the Investment Company Institute. To examine market participants' views on the existing criteria for accredited investor status, and the potential effects of changes to the standard on investor protection and capital formation, we conducted structured interviews by telephone with a judgmental sample of 27 market participants whom we categorized into four groups intended to represent different segments of the accredited investor population: (1) attorneys who have experience in private placement transactions, (2) accredited investors who invest in private placement securities, (3) retail investors who meet the current accredited investor criteria but do not actively invest in private placement securities, and (4) investment advisers and broker-dealers who work with accredited investors. We asked the participants of the structured interviews (market participants) about the feasibility, likelihood of market acceptance, and impact on investor protection and capital formation of adjusting the existing criteria. We worked with associations to identify market participants to interview. We identified seven associations that represent the types of individuals in the four groups described above.
These associations are the American Bar Association, the Securities Industry and Financial Markets Association, the Financial Services Institute, ACA, AAII, IAA, and NVCA. We selected these groups from a broader list of organizations we previously identified as knowledgeable about accredited investors or that were recommended by others. These organizations then recruited members to participate in the interviews. We also asked participants for referrals to other individuals who might be interested in participating. Our sample of 27 market participants consisted of 11 attorneys, 5 investment advisers, 5 angel investors, and 6 retail investors who qualify as accredited investors but do not actively invest in unregistered securities. We asked them demographic questions to learn more about their backgrounds and help put their responses in context. We interviewed 11 attorneys, of whom 1 had more than 4 years of experience with Regulation D transactions, 1 had more than 11 years of experience, and 9 had more than 20 years. They represented accredited investors, issuers, and placement agents and practiced in different cities throughout the United States, although 5 were located in New York City or Washington, D.C. We interviewed 5 broker-dealers or investment advisers: 3 said they recommended Regulation D investments to clients at least once a week; 1 said once a month; and 1 said several times a year. The size of their firms ranged from fewer than 50 employees to 5,000 employees or more. We did not ask the advisers about their geographic location. We interviewed 5 angel investors, 1 of whom had 4 to 10 years of experience investing in private placements and 4 of whom had more than 11 years of experience. They invested in industries including information technology, biotechnology, health care, energy, and medical devices and equipment. The companies in which these angel investors invested were primarily located in the Midwest, but two individuals invested in start-up companies throughout the United States. We interviewed 6 retail investors over the age of 55, all of whom had at least a bachelor's degree and 1 of whom occasionally used the services of an investment adviser. They resided in different regions of the United States. We conducted the structured interviews from February through March 2013. One team member asked the questions while a second recorded the responses in a web-based questionnaire. We took steps to minimize errors, such as difficulties interpreting a particular question, by pretesting the interview questions with one member from each of the groups in January 2013. We conducted pretests to make sure that the questions were clear and unbiased and that they did not place an undue burden on respondents. An independent survey specialist within GAO also reviewed a draft of the questions prior to their administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. The results from the structured interviews are not generalizable to the population of market participants and represent only the opinions of these 27 individuals. However, as described above, we took steps to obtain opinions from a diverse group of market participants. Because the population of accredited investors is unknown, we performed an additional analysis to suggest how changing the thresholds for net worth and income would affect the pool of eligible investors.
We used data from the Survey of Consumer Finances (SCF), which the Board of Governors of the Federal Reserve System (Federal Reserve) issues every 3 years, to examine how changing the thresholds would affect the pool of eligible households represented by survey respondents. For the purposes of this analysis, a household refers to the primary economic unit within a household (to which the SCF refers as a family). We used net worth and income as defined in a Federal Reserve publication about the SCF. For net worth, we excluded home equity. We did not attempt to determine the appropriate threshold based on a household's marital status. For the estimates in table 3, we followed SCF guidance and estimated the standard errors (which appear in parentheses). We found the data to be sufficiently reliable to evaluate household income and net worth levels. (A simplified sketch of this threshold analysis appears below.) To examine market participants' views on alternative financial and other qualification criteria, we identified accredited investor criteria that market participants and others proposed, and we identified relevant criteria used by the Ontario (Canada) Securities Commission and the United Kingdom Financial Services Authority to qualify similar types of investors. From the list of proposed and foreign regulator criteria, we identified criteria that were consistent with the intended purpose of the accredited investor standard and that could be used to qualify individual investors as accredited. We grouped these criteria into two categories: investors' access to financial resources and their understanding of financial risk.
(1) Investor's financial resources
Liquid investments requirement. Investors would have to have a minimum dollar amount of investments in liquid assets (that is, assets that can be easily sold, are marketable, and the value of which can be verified by a financial institution).
Fixed-percentage investment. Investors would be limited in their investments in single, nonpublic securities offerings to a certain percentage of their individual net worth or income.
(2) Investor's understanding of financial risk
Licenses/certification. Investors would have to demonstrate knowledge of financial risks related to private placement investments to receive licenses from regulators or authorized third parties.
Investor sophistication test. Investors would complete SEC-approved investor education classes and pass a test that would qualify them as accredited.
Self-certifying as a sophisticated investor. Investors would self-certify based on standards such as membership in a network of investment groups, work experience, or investment experience.
Education. Investors would qualify as accredited based on an advanced degree in business or finance, a chartered financial analyst credential, or a similar designation.
Opt-in provision. Investors would sign a statement that waives significant rights to file a complaint or seek compensation unless there is fraud on the part of the issuer.
Registered investment adviser requirement. Investors who wished to invest in private placement offerings would have to use the services of registered investment advisers to manage their investment accounts.
We asked the 27 market participants whether the current standard (income and net worth) should be adjusted and asked them to identify, from the two respective categories, the most important criteria for balancing capital formation and investor protection.
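To illustrate the threshold analysis described above, the following is a minimal sketch, not the computation we actually ran. The file name, column names, and consumer price index values are hypothetical placeholders that would need to be confirmed against Federal Reserve and Bureau of Labor Statistics documentation, and a full replication would also use the SCF's replicate weights to produce the standard errors reported in table 3.

    # Sketch: restate the $1 million net worth threshold in later-year
    # dollars and count the weighted households that remain eligible.
    # Column names ("networth_ex_home", "wgt") and the CPI-U values are
    # illustrative only, not the actual SCF variables used in this report.
    import pandas as pd

    CPI_1982, CPI_2010 = 96.5, 218.1  # illustrative annual averages
    adjusted = 1_000_000 * (CPI_2010 / CPI_1982)  # roughly $2.3 million

    scf = pd.read_csv("scf_extract.csv")  # hypothetical public-use extract

    def eligible_millions(df, threshold):
        """Weighted count, in millions, of households whose net worth
        (already excluding home equity) meets the threshold."""
        return df.loc[df["networth_ex_home"] >= threshold, "wgt"].sum() / 1e6

    print(eligible_millions(scf, 1_000_000))  # current threshold
    print(eligible_millions(scf, adjusted))   # inflation-adjusted threshold

The same weighted-count function can be applied to an income column to examine the income thresholds, which is why the analysis treats the two criteria symmetrically.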
We asked the market participants three different lines of questions: (1) the effect of increasing or decreasing the minimum income or net worth thresholds; (2) the effect of replacing the current standard with an alternative criterion for financial resources; and (3) the effect of adding a criterion for understanding financial risk to the current standard. By replace, we meant that potential investors must qualify under the alternative criterion. By add, we meant that if potential investors did not qualify under the current standard, they could qualify under an alternative. For each criterion, we asked about feasibility, likelihood of market acceptance, and impact on investor protection and capital formation. We did not ask about the effects of adding alternative financial resources criteria to the current standard or of replacing the current standard with criteria for understanding financial risk. However, we asked market participants whether a criterion for financial resources should be added to the current standard and whether any criterion for understanding financial risk should replace it. Appendix III includes the questions that we asked market participants and our results. We compared response patterns among the four types of market participants that we interviewed and found that, generally, there were few differences in those patterns. A response pattern is the variability in the response options chosen. For example, all 11 lawyers responded that the education standard was feasible, whereas the other groups responded that it was feasible (12) or somewhat or not feasible (4). However, we cannot conclude that the differences indicate a statistically significant difference in opinion because of the small and varying sample sizes of each group and the fact that they are not statistically representative samples. We conducted this performance audit from June 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Securities Act of 1933 generally requires securities issuers to register securities offerings with SEC. However, the act exempts certain offerings from registration, and SEC has implemented exemptions by regulation when it determines that the registration procedure is not required for investor protection. Regulation D, adopted by SEC in 1982, describes terms and conditions that qualify a securities offering for an exemption from registration. Regulation D contains multiple rules, which include definitions of terms, general conditions applicable to the offerings, requirements for exemptions, and deadlines for notices of claims of exemptions. Table 4 summarizes Rules 504, 505, and 506, which establish registration exemptions. We conducted structured interviews to determine the potential impact on investor protection and capital formation of replacing the existing criteria for determining accredited investor status or adding alternative criteria to them. We also asked for market participants' opinions about the potential impact on investor protection and capital formation of marketing and advertising securities offerings under Regulation D, Rule 506.
We interviewed 27 individuals representing different segments of the accredited investor population: 11 attorneys who represent issuers of private placements, 5 investment advisers, 5 individuals who are active investors in private placements, and 6 individuals who meet the current criteria for accredited investor status but are not active investors. The interviews were conducted from February through March 2013. For more information about our methodology for designing and conducting these interviews, see appendix I. In addition to the individual named above, Triana McNeil (Assistant Director), Benjamin Bolitzer, Jeremy Conley, William Chatlos, Katherine Bittinger Eikel, Elizabeth Jimenez, Jill Lacey, Angela Messenger, Marc Molino, Barbara Roesmann, Jessica Sandler, and Weifei Zheng made major contributions to this report.
Accredited investors who meet certain income and net worth thresholds may participate in unregistered securities offerings. GAO determined that the intended purposes of the accredited investor standard are to (1) protect investors by allowing only those who can withstand financial losses access to unregistered securities offerings and (2) streamline capital formation for small businesses. To qualify as accredited, SEC requires an investor to have an annual income over $200,000 ($300,000 for a married couple) or a net worth over $1 million, excluding a primary residence. The thresholds were set in the 1980s and 2010. The Dodd-Frank Wall Street Reform and Consumer Protection Act mandates GAO to study the criteria for qualifying individual investors as accredited. This report examines market participants' views on (1) the existing criteria for accredited investor status and (2) alternative criteria. To address these objectives, GAO conducted a literature review, examined relevant data, and interviewed domestic and foreign regulators and industry representatives to identify alternative criteria. GAO also conducted structured interviews of 27 market participants (including broker-dealers, investment advisers, attorneys, and accredited investors). Of the existing criteria in the Securities and Exchange Commission's (SEC) accredited investor standard, many market participants identified net worth as the most important criterion for balancing investor protection and capital formation. For example, two market participants said the net worth criterion, more so than income, likely indicates the investors' ability to accumulate wealth and their investment knowledge. Others noted that some parts of the market might not accept adjustments to the thresholds. For example, an association of angel investors--accredited investors who invest in start-up companies--told GAO that they would be resistant to increased thresholds because it would decrease the number of eligible investors. GAO analysis of federal data on household net worth showed that adjusting the $1 million minimum threshold to approximately $2.3 million, to account for inflation, would decrease the number of households qualifying as accredited from approximately 8.5 million to 3.7 million. While citing net worth as the most important criterion, several market participants GAO interviewed said that alternative criteria related to an investor's liquid investments and their use of an investment adviser also could balance investor protection and capital formation. GAO obtained views on eight alternative criteria that focus on investors' financial resources and their understanding of financial risk--criteria that SEC or industry groups previously proposed or that foreign regulators use. Among the financial resources criteria, market participants with whom GAO spoke most often identified a liquid investments requirement--a minimum dollar amount of investments in assets that can be easily sold, are marketable, and the value of which can be verified--as the most important for balancing investor protection and capital formation. Among the understanding financial risk criteria, market participants most often identified the use of a registered investment adviser. Beginning in 2014, SEC must review the accredited investor definition every 4 years to determine whether it should be adjusted. This study provides a reasonable starting point for SEC's review. 
Specifically, SEC will have the views of market participants about how existing and alternative qualifying criteria could help determine an investor's ability to bear and understand risks associated with unregistered securities offerings. SEC should consider alternative criteria for the accredited investor standard. For example, participants with whom GAO spoke identified adding liquid investments and use of a registered adviser as alternative criteria. SEC agreed with GAO's recommendation.
Federally recognized tribes have a government-to-government relationship with the United States and are eligible to receive certain protections, services, and benefits by virtue of their unique status as Indian tribes. Federal Indian policy—which has undergone significant changes since the end of the colonial era—has influenced how the federal government has recognized and currently recognizes tribes: Treaty, reservations, and removal era. During this period, the federal government entered into treaties with Indian tribes to, for example, establish peace, fix land boundaries, and establish reservations. For some federally recognized Indian tribes, treaties provided the basis for subsequent actions that established their recognition. Assimilation era. The Act of February 8, 1887, commonly referred to as the General Allotment Act or the Dawes Act, was a comprehensive congressional attempt to change the role of Indians in American society by encouraging assimilation through individual land ownership. Under this policy, tribes surrendered tribally owned land for individual allotments of land and, in some cases, surplus land was sold to white settlers. As a result of this policy, the total amount of tribal land in the United States was reduced by about 90 million acres. Indian Reorganization Act of 1934. In the 1930s and 1940s, federal Indian policies generally reflected a shift away from assimilation policies toward increased tolerance and respect for traditional aspects of Indian culture. The Indian Reorganization Act of 1934 encouraged economic development, self-determination, cultural pluralism, and the revival of tribalism. Specifically, the act permitted tribes to adopt constitutions and organize into federally recognized Indian tribes, including tribes without a common linguistic, cultural, or political heritage that lived together on one reservation. Act of June 18, 1934, ch. 576, 48 Stat. 984 (1934) (known as the Indian Reorganization Act or Wheeler-Howard Act). Termination era. On August 1, 1953, Congress adopted House Concurrent Resolution 108, which established a policy of making Indians "subject to the same laws and entitled to the same privileges and responsibilities" that apply to other citizens and declared that Indian tribes and their members "should be freed from Federal supervision and control." Subsequently, in the 1950s and 1960s, the federal government terminated its government-to-government relationships with a number of tribes. Congress has since restored government-to-government relationships with 38 tribes that were terminated during the termination era (see app. II for more information about terminated and restored tribes). Self-determination era. Since the 1970s, the federal government has adopted policies to promote the practical exercise of tribes' inherent sovereign powers, including fostering economic development of Indian land and encouraging self-determination of Indian affairs. For example, the Indian Self-Determination and Education Assistance Act of 1975 enables federally recognized Indian tribes to administer certain federal programs for Indians, which were previously administered by the federal government on their behalf.
In 1977, the American Indian Policy Review Commission reported that "[t]he distinction the Department of the Interior draws between the status of recognized and unrecognized tribes seems to be based merely on precedent—whether at some point in a tribe's history it established a formal political relationship with the Government of the United States." The commission identified 133 non-federally recognized tribes. At that time, no administrative process was in place for these non-federally recognized tribes to seek federal recognition. In 1978, Interior established an administrative acknowledgment process by which Indian groups could submit a petition to seek federal recognition. Interior maintains a list of entities that have submitted a letter of intent to petition for federal recognition or have initiated the administrative acknowledgment process by submitting a complete petition. This list includes at least 350 entities, according to Interior's Office of Federal Acknowledgment. The process of developing a complete petition is expensive and may take years. Consequently, as we reported in November 2001 (GAO, Indian Issues: Improvements Needed in Tribal Recognition Process, GAO-02-49 (Washington, D.C.: Nov. 2, 2001)), and as of April 2011, most of these entities had not yet submitted a complete petition. A complete petition must include information on seven criteria established in the regulations governing Interior's administrative acknowledgment process. For example, the entity must submit evidence that it has been identified as an American Indian entity on a substantially continuous basis since 1900 and that it has maintained political influence or authority over its members as an autonomous entity from historic times until the present. As of 2012, there were 566 federally recognized Indian tribes. Non-federally recognized tribes can generally seek federal recognition through Interior's administrative acknowledgment process or through other means, such as congressional action. For example, the Lumbee Tribe of North Carolina—which was the subject of legislation but not federally recognized in 1956—petitioned Interior for recognition and has also sought recognition through legislation. As of April 29, 2011, 17 entities had been granted federal recognition through Interior's administrative acknowledgment process, and 32 had been denied. Federal recognition of the Shinnecock Indian Nation in New York—the tribe most recently recognized through Interior's administrative acknowledgment process—became effective on October 1, 2010. No official list of non-federally recognized tribes similar to BIA's list of federally recognized Indian tribes exists. Non-federally recognized tribes fall into two distinct categories: (1) state-recognized tribes that are not also federally recognized and (2) other groups that self-identify as Indian tribes but are neither federally nor state recognized. Twelve state governments officially recognize one or more non-federally recognized tribes within their borders, according to state officials we spoke with. Each state government determines which groups in the state, if any, should be recognized. Some of these 12 states—such as North Carolina and South Carolina—have formalized their procedures for recognizing tribes, while other states—such as Massachusetts—have not. For example, North Carolina's Commission of Indian Affairs and South Carolina's Commission for Minority Affairs have promulgated regulations outlining the process for entities seeking recognition in those states.
Furthermore, some states have established state reservations that are not also federal Indian reservations. No federal agency maintains a list of state-recognized tribes and their reservations, but the U.S. Census Bureau collected information about state-recognized tribes as part of its effort to designate American Indian Areas for the Decennial Census of 2010. The U.S. Census Bureau considers an Indian tribe to be state recognized if it is specifically recognized by a state government through treaty (with, for example, 1 of the original 13 colonial assemblies), state legislation, or other formal process. (See app. III for a list of state-recognized tribes that are not federally recognized and more detailed information about how we compiled this list.) In some instances, representatives of state governments have acknowledged the existence of a tribe or its members in the state, but the state has not officially recognized the tribe. Forms of acknowledgment may include a governor's proclamation or legislative resolution. For example, in March 2009 the Texas Senate and House of Representatives each adopted a simple resolution (voted on only by the house in which it was introduced and not sent to the Governor to sign) to commend and recognize the Lipan Apache Tribe of Texas. The resolution stated that the tribe is the present-day incarnation of the clans, bands, and divisions historically known as the Lipan Apaches, who have lived in Texas and Northern Mexico for 300 years. According to Texas officials, such simple resolutions do not go beyond the bounds and authority of the house that adopts them and do not officially establish any group as a state-recognized tribe. In another example, the California Native American Heritage Commission maintains a list of organized tribal governments in that state—including both federally recognized Indian tribes and non-federally recognized tribes. Despite acknowledging these organized tribal governments and requiring cities and counties to consult with them under certain circumstances, California does not have a process for officially recognizing non-federally recognized tribes, according to a state official we spoke with. A number of other groups self-identifying as Indian tribes are not federally or state recognized. These include groups in each of the following categories: groups self-identifying as Indian tribes that have initiated but not yet completed Interior's administrative acknowledgment process, groups self-identifying as Indian tribes that have been denied federal recognition through Interior's administrative acknowledgment process, Indian tribes whose status as a federally recognized Indian tribe was terminated by the federal government and has not been restored, and other entities that self-identify as Indian tribes. The purpose of the Single Audit Act of 1984, as amended, was, among other things, to promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities. Under the act, certain nonfederal entities—such as a state, local government, Indian tribe, or nonprofit organization—that expend $500,000 or more in federal awards in a fiscal year must have an audit conducted in accordance with Office of Management and Budget (OMB) Circular No. A-133 and submit a report regarding the audit to the Federal Audit Clearinghouse Single Audit Database. OMB Circular No.
A-133 sets forth standards for obtaining consistency and uniformity among federal agencies for the audit of nonfederal entities expending federal awards. Audits of nonfederal entities' financial statements and schedule of expenditures of federal awards must be conducted by an independent auditor in accordance with generally accepted government auditing standards. The report on the audit provides information about the nonfederal entity, its federal programs, and the results of the audit. Audits are to be completed and the requisite report submitted within the earlier of 30 days after receipt of the auditor's report or 9 months after the end of the audit period. The federal agency that provides an award directly to the recipient (known as the federal awarding agency) is generally responsible for ensuring that audits for the federal awards it makes are completed and reports are received in a timely manner and in accordance with OMB Circular No. A-133. In cases of continued inability or unwillingness to have an audit conducted, OMB Circular No. A-133 directs federal agencies to take appropriate actions using sanctions such as withholding a percentage of federal awards until the audit is conducted, withholding or disallowing overhead costs, suspending federal awards until the audit is conducted, or terminating the federal award. Of the list of about 400 non-federally recognized tribes that we compiled, we identified 26 that received federal funding for fiscal years 2007 through 2010. Most of the 26 non-federally recognized tribes were eligible for these funds because of their status as a nonprofit or state-recognized tribe (24 out of 26). Most of the 24 federal programs that provided this funding were authorized to fund either nonprofits or state-recognized tribes (18 out of 24). Other non-federally recognized tribes that received funding but were not nonprofits or state-recognized tribes were awarded funding through programs that had authority to fund other types of entities. Out of the approximately 400 non-federally recognized tribes that we identified, we determined that 26 non-federally recognized tribes had received federal funding for fiscal years 2007 through 2010. Twenty-four of these 26 non-federally recognized tribes were organized as nonprofit organizations (see table 1). As nonprofits, these 24 non-federally recognized tribes would be eligible to receive federal funding from any program authorized to fund nonprofits. As we have reported in the past, the federal government is increasingly partnering with nonprofit organizations because nonprofits bring many strengths, such as flexibility to respond to needs and access to those needing services. In fiscal year 2006, about 700 federal programs provided funding for nonprofits. In addition, we found that 14 of the 26 non-federally recognized tribes that received funding over the 4-year period were state-recognized tribes and would be eligible to receive funding from programs specifically authorized to fund state-recognized tribes. (See app. IV for information about the nonprofit and state-recognition status of non-federally recognized tribes we identified as having received federal funding before fiscal year 2007.) Of the 26 non-federally recognized tribes that received federal funding over the 4-year period, 2—the Eel River Tribe of Indiana and the Powhatan Renape Nation—were neither nonprofits nor state-recognized tribes at any time in fiscal years 2007 through 2010.
Federal grants awarded to these two entities are described later in this report. As of April 29, 2011, at least 24 of the 26 non-federally recognized tribes had pursued, or expressed an interest in pursuing, federal recognition through Interior's administrative acknowledgment process. Of these 24 entities, 13 have submitted letters of intent to petition for federal recognition and have advanced no further in the process, 5 have been denied, 2 have received federal recognition, 1 had a proposed negative finding published in December 1994, 1 has been under active consideration since November 2010, 1 was deemed ineligible to apply through the administrative acknowledgment process, and 1 requested that its petition be placed on hold. We identified 24 federal programs that provided funding to the 26 non-federally recognized tribes for fiscal years 2007 through 2010. Specifically, of the 24 programs we identified, 11 programs were authorized to fund nonprofits, 6 had explicit statutory or regulatory authority to fund state-recognized tribes, 1 had authority to fund tribes located on state reservations, and 1 was authorized to fund state-recognized tribes on state reservations. See table 2 for information on the 24 programs, including their assigned number from the Catalog of Federal Domestic Assistance (CFDA). This catalog is administered by the General Services Administration, and it provides information on federal domestic assistance programs, including grant programs. (See also app. V for a list of all federal programs with explicit statutory or regulatory authority to fund state-recognized tribes.) Some of these 24 programs were also authorized to fund other eligible entities, and in some cases non-federally recognized tribes could have received funding if they met the eligibility requirements for the other entities. Examples of the 24 federal programs that awarded funding to non-federally recognized tribes in the 4-year period and that are authorized to fund nonprofit organizations that meet all applicable eligibility requirements include the following: the Rural Business Enterprise Grants program, administered by the Department of Agriculture, which has statutory authority to fund public bodies, including Indian tribes on state reservations, and private nonprofit corporations for measures designed to finance and facilitate development of small and emerging private business enterprises, among other measures; the Job Opportunities for Low-Income Individuals program, administered by the Department of Health and Human Services (HHS), which has statutory authority to enter into agreements with nonprofits for the purpose of conducting projects that provide technical and financial assistance to private employers that assist them in creating jobs for low-income individuals; and the Retired and Senior Volunteer Program, administered by the Corporation for National and Community Service, which has statutory authority to make grants to or contract with nonprofits to support programs for certain volunteer service projects for senior citizens. Six of the 24 programs were not authorized to fund either nonprofits or state-recognized tribes, but non-federally recognized tribes received funding under these programs for other reasons.
For example, the Department of Housing and Urban Development's (HUD) Economic Development Initiative-Special Project program awarded funding to the Shinnecock Indian Nation as a result of congressional direction in a committee report accompanying an appropriations act before the tribe was federally recognized. In another example, the Environmental Protection Agency awards environmental justice grants pursuant to research and development provisions in various statutes, such as the Clean Water Act and the Solid Waste Disposal Act, and some of these statutes authorize broad categories of recipients, such as institutions and organizations. In addition, the Department of Education's (Education) Small, Rural School Achievement Program can provide grants to entities as local education agencies, and the Haliwa-Saponi Indian Tribe of North Carolina received a grant because it is so organized. The other three programs that were not authorized to provide funding to nonprofits or state-recognized tribes were authorized to provide funding to the following types of entities: Under the Department of Commerce's Remote Community Alert Systems Program, eligible grant recipients include "tribal communities" that meet certain requirements. The Eel River Tribe of Indiana received funding through this program. The Indian Community Development Block Grant Program is authorized to fund Indian tribes that were eligible recipients under the State and Local Fiscal Assistance Act of 1972. As explained in appendix V, we identified at least six non-federally recognized tribes that have received funding under the act, as well as five federally recognized Indian tribes that received funding before the effective date of their federal recognition. The Community Development Financial Institutions Fund can provide technical assistance grants for activities that enhance the capacity of a community development financial institution, which is an entity that meets the following five criteria: (1) has a primary mission of promoting community development; (2) serves an investment area or targeted population; (3) provides development services in conjunction with equity investments or loans, directly or through a subsidiary or affiliate; (4) maintains, through representation on its governing board or otherwise, accountability to residents of its investment area or targeted population; and (5) is not an agency or instrumentality of the United States, any state, or political subdivision of a state. The Lumbee Revitalization and Community Development Corporation, which was established by the Lumbee Tribe of North Carolina to foster economic development, received funding through this program. Some non-federally recognized tribes may have been eligible for some of the federal funding they received through several means. For example, a non-federally recognized tribe that was organized as a nonprofit and was also state recognized may have been eligible to apply for some funding as either a nonprofit or a state-recognized tribe. In these cases, we did not attempt to determine the basis on which the funding was awarded. Federal agencies awarded more than $100 million in funding to the 26 non-federally recognized tribes for fiscal years 2007 through 2010. As shown in table 3, the majority of this funding was awarded to a small number of non-federally recognized tribes. For example, funding to the Lumbee Tribe of North Carolina accounted for about 76 percent of the federal funding that we identified.
Overall, the Lumbee Tribe of North Carolina received more than $78 million awarded by 10 programs in six federal agencies during the 4-year period. Some of this funding was awarded to incorporated entities—such as the Lumbee Regional Development Association—that were created by the Lumbee Tribe of North Carolina to provide services to the Lumbee Indian community. Most of the funding (75 percent) awarded to the Lumbee Tribe of North Carolina was awarded by HUD's Indian Housing Block Grants program. Similarly, as shown in table 4, nearly all the federal funding we identified as having been awarded to the 26 non-federally recognized tribes over the 4-year period was awarded by a small number of federal programs. Specifically, 95 percent of the funding was awarded by seven federal programs in four agencies. Each of these programs awarded a total of more than $1.5 million to non-federally recognized tribes during the period. About 67 percent (nearly $69 million) of the funding we identified was awarded by HUD's Indian Housing Block Grants program. Five non-federally recognized tribes have received funding through this program: the Lumbee Tribe of North Carolina, the MOWA Band of Choctaw Indians, the Haliwa-Saponi Indian Tribe of North Carolina, the Coharie Tribe of North Carolina, and the Waccamaw Siouan Tribe of North Carolina. To determine funding amounts for all federally recognized Indian tribes and eligible non-federally recognized tribes, the program uses a formula that considers factors such as the population and housing conditions of tribal communities. Program funds are used to support such activities as housing development. During our review, we identified some instances where federal agencies had provided funds to non-federally recognized tribes for which grant eligibility is disputed and one instance where an agency is trying to enforce the Single Audit Act's reporting requirements. Specifically, when we compared the eligibility requirements for each federal program that provided funding to non-federally recognized tribes for fiscal years 2007 through 2010 with the characteristics of each entity, we found that Education funded some non-federally recognized tribes under the American Indian Vocational Rehabilitation Services Program that appear to be ineligible and that HHS funded two non-federally recognized tribes in New Jersey that the state does not consider to be state recognized. In addition, we identified one instance where HHS has initiated action to better ensure that a non-federally recognized tribe completes its required financial report under the Single Audit Act. The relevant statute provides: "The term 'reservation' includes Indian reservations, public domain Indian allotments, former Indian reservations in Oklahoma, and land held by incorporated Native groups, regional corporations, and village corporations under the provisions of the Alaska Native Claims Settlement Act." Education told us that under its long-standing interpretation of this definition, "reservation" includes not only the four categories enumerated in the statute, but also "a defined and contiguous area of land where there is a concentration of tribal members and in which the tribe is providing structured activities and services, such as the tribal service areas identified in the tribe's grant application." Education's interpretation allows a tribe to self-define "reservation" by designating any area and providing structured services to tribal members there.
Rules of statutory construction weigh against this interpretation. Under the noscitur a sociis rule, words grouped in a list should be given related meaning. Courts rely on this rule "to avoid ascribing to one word a meaning so broad that it is inconsistent with its accompanying words, thus giving unintended breadth to the Acts of Congress." Applied here, this rule means that to satisfy the statutory definition of "reservation," the area should, like the four areas designated in the statute, be established by or pursuant to a treaty, statute, regulation, executive order, or other formal government recognition. The tribal service areas that constitute "reservations" for purposes of Education's grants to the United Houma Nation, Lumbee Tribe of North Carolina, Choctaw-Apache Tribe of Ebarb, and Four Winds Cherokee, by contrast, have been delineated by the tribes themselves, which have then applied for federal grants on the basis of these self-created geographic areas. This approach essentially transforms the grant program into one for "Indian tribes no matter where they are located, to provide assistance to disabled Indians no matter where they reside." Such an outcome is not what Congress intended or enacted. In addition, the statute's use of both the term "reservation" and the term "tribal service area" undercuts Education's interpretation because it suggests Congress did not believe a tribal service area is simply a type of reservation. The statute authorizes the department to make grants to certain governing bodies of Indian tribes on federal and state reservations and, elsewhere, discusses how Indians with disabilities living on or near a reservation or tribal service area are to be provided vocational rehabilitation services. If the term "reservation" included the term "tribal service area," Congress would not have referred to them separately in section 101(a)(11)(F)(ii) of the Rehabilitation Act of 1973, as amended in 1998. The state-recognition status of some non-federally recognized tribes in New Jersey has been called into question. The two non-federally recognized tribes in New Jersey that received federal funding in the 4-year period covered by our review were the Nanticoke Lenni-Lenape Indians of New Jersey and the Powhatan Renape Nation. About 30 years ago, the New Jersey legislature passed concurrent resolutions regarding the Confederation of Nanticoke-Lenni Lenape and the Powhatan Renape Nation. Concurrent resolutions are not signed by the Governor and therefore do not have the force of law. In addition, these tribes are referred to in a few state statutes, including the statute establishing the New Jersey Commission on Native American Affairs. Since at least December 2001, however, the state of New Jersey has officially taken the position in correspondence with the Department of the Interior that these actions do not constitute official state recognition and that these tribes therefore are not considered to be state recognized. In addition, in 2002, New Jersey law was amended to require specific statutory authorization for the recognition of the authenticity of any group as an American Indian tribe. No state laws contain specific statutory authorization for state recognition of any Indian tribe in New Jersey.
Notwithstanding this history, the Nanticoke Lenni-Lenape Indians of New Jersey and the Powhatan Renape Nation continue to consider themselves, and have represented themselves to HHS, as state-recognized tribes in applying for funding awarded by the Community Services Block Grant program and the Low-Income Home Energy Assistance program, respectively, during the 4-year period of our review. These programs, however, are authorized to fund only states and federally and state-recognized Indian tribes. The Nanticoke Lenni-Lenape's application repeatedly stated that it was state recognized and included the concurrent resolution from the 1980s as evidence of state recognition. Agency officials told us they did not take any additional steps to verify that the entity was in fact state recognized; they also said they have had regular contact with state officials about the awards, and those state officials never told them that the tribe was not state recognized. On the basis of this statement in the grant application, the Nanticoke Lenni-Lenape Indians of New Jersey received two grants totaling $44,405 from the Community Services Block Grant Program—one in fiscal year 2007 and one in fiscal year 2010. Regarding the Low-Income Home Energy Assistance program, the Powhatan Renape Nation presented itself as state recognized in a letter submitted with its grant application. Agency officials stated that the entity's state-recognition status would have been verified when it started receiving program funding about 30 years ago. During the 4-year period of our review, the Powhatan Renape Nation received $200,342 from the Low-Income Home Energy Assistance program in fiscal year 2007. The Accohannock Indian Tribe, one of the non-federally recognized tribes that we identified as receiving federal funds during the period of our review, did not have an audit conducted or submit a report for 2009 as required by the Single Audit Act. According to HHS data, the Accohannock Indian Tribe reported spending over $1 million in federal funds for calendar year 2009 from three different HHS programs—Community Services Block Grant Program Discretionary Awards, Job Opportunities for Low-Income Individuals, and the Administration for Native Americans' Social and Economic Development Strategies Program. The entity's expenditure of federal funds in 2009 was more than twice the threshold that triggers the Single Audit Act's requirements. Unless an exception applied, the entity's audit report would have been due 9 months after the end of the entity's fiscal year, at the latest. Since the entity's fiscal year ends on December 31, its audit report would have been due by September 30, 2010, at the latest. As of February 7, 2012, the Accohannock Indian Tribe had not submitted its required audit report for 2009 to the Federal Audit Clearinghouse, which maintains the Single Audit Database. HHS is the federal awarding agency responsible for ensuring the entity's compliance with the act because it is the only agency that awarded the Accohannock Indian Tribe federal funds in 2009. The entity's noncompliance with the Single Audit Act was flagged, and the agency sent a letter of inquiry to the Accohannock Indian Tribe on March 8, 2011. According to agency officials, they did not receive a response to their letter. More recently, after we began inquiring about the issue, the agency sent the entity an Audit Deficiency Notice via certified mail on February 7, 2012.
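The reporting trigger and due-date rule applied to the Accohannock case above can be expressed as a short sketch. This is illustrative only and simplifies OMB Circular No. A-133; the function names are our own, and the $500,000 threshold and the 30-day/9-month rule are taken from the discussion earlier in this report.

    # Sketch of the Single Audit Act logic discussed above: an audit is
    # triggered by $500,000 or more in federal expenditures in a fiscal
    # year, and the report is due at the earlier of 30 days after receipt
    # of the auditor's report or 9 months after the audit period ends.
    from datetime import date, timedelta

    SINGLE_AUDIT_THRESHOLD = 500_000
    DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

    def audit_required(federal_expenditures):
        return federal_expenditures >= SINGLE_AUDIT_THRESHOLD

    def add_months(d, months):
        # Calendar-month arithmetic; clamps the day for short months.
        m = d.month - 1 + months
        year, month = d.year + m // 12, m % 12 + 1
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        day = min(d.day, 29 if (month == 2 and leap) else DAYS_IN_MONTH[month - 1])
        return date(year, month, day)

    def report_due_date(period_end, report_received=None):
        nine_months_out = add_months(period_end, 9)
        if report_received is None:
            return nine_months_out  # latest possible due date
        return min(report_received + timedelta(days=30), nine_months_out)

    # The Accohannock case: over $1 million expended in calendar year 2009,
    # so an audit was required and the report was due by September 30, 2010.
    print(audit_required(1_000_000))            # True
    print(report_due_date(date(2009, 12, 31)))  # 2010-09-30

Because the circular allows exceptions and extensions, this sketch should be read as a restatement of the arithmetic in the preceding paragraph, not as a compliance tool.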
According to agency officials, because the Accohannock Indian Tribe is not a current grantee, the agency has limited enforcement options. Nevertheless, agency officials stated they plan to continue to pursue their administrative options to encourage the Accohannock Indian Tribe to meet its federal financial reporting obligations. Determining the state-recognition status of non-federally recognized tribes can be a difficult and confusing task, as can determining which entities have state reservations. No official consolidated list of state-recognized tribes or state reservations exists, and states have varying policies and procedures for providing state recognition. Nevertheless, Education provided funds under the American Indian Vocational Rehabilitation Services Program to state-recognized tribes that were not located on state reservations but rather had self-defined tribal service areas, raising substantial questions about the tribes' eligibility for the funding. HHS, for its part, provided funds to two non-federally recognized tribes in New Jersey as state-recognized tribes even though the state does not officially recognize them. Finally, when entities expend $500,000 or more in federal funds in a fiscal year, they are required to comply with the reporting requirements of the Single Audit Act. The Accohannock Indian Tribe did not comply with those requirements for 2009. In response, HHS has begun taking administrative steps to encourage the Accohannock Indian Tribe to meet its federal financial reporting obligations. We are making the following three recommendations. To ensure that grants under the American Indian Vocational Rehabilitation Services Program are made consistent with applicable law, we recommend that the Secretary of Education review the department's practices with respect to eligibility requirements and take appropriate action with respect to grants made to tribes that do not have federal or state reservations. To ensure the proper award and oversight of grants by the Department of Health and Human Services, we recommend that the Secretary of Health and Human Services take the following two actions. Investigate whether the Nanticoke Lenni-Lenape Indians of New Jersey and the Powhatan Renape Nation met the statutory eligibility requirements for the grants they were awarded and take appropriate action as necessary. In doing so, the agency should consult with the state of New Jersey to determine whether the state has officially or formally recognized the tribes and treats them as state recognized. Continue to pursue the Accohannock Indian Tribe's noncompliance with the Single Audit Act and take appropriate action as necessary. We provided a copy of our draft report to the Departments of Agriculture, Education, Energy, the Interior, Labor, and HHS. In its written response, reprinted in appendix VI, Education stated its commitment to reviewing its practices for determining eligibility for the American Indian Vocational Rehabilitation Services Program and taking appropriate action, such as clarifying its interpretation, if helpful. In addition, Education requested that we change the phrase "appropriate corrective action" in the recommendation to simply "appropriate action." We made that change to the recommendation. Education's commitment to review its practices and take appropriate action is consistent with our revised recommendation.
While agreeing to review its practices, Education disagreed with our finding (which was the basis for the recommendation) that state-recognized tribes without state reservations, but with self-defined tribal service areas, are likely ineligible for the American Indian Vocational Rehabilitation Services grant program. Education stated that its interpretation of the term "reservation" was reasonable and that we should defer to its interpretation. However, for the reasons detailed in the report, we continue to have substantial questions about whether Education reasonably interpreted the statutory definition of "reservation" for the American Indian Vocational Rehabilitation Services grant program to include tribal service areas. We note that Education's interpretation effectively rewrites the statute to allow tribes to unilaterally self-define a reservation and then apply for a grant. That reading appears to go beyond Congress's intent and language. In addition, Education states that its interpretation is supported by the 1998 amendments to the statute, which (1) authorized American Indian Vocational Rehabilitation Services Program grantees to serve American Indians with disabilities living near reservations as well as those living on reservations, and (2) required states and American Indian Vocational Rehabilitation Services Program grantees to enter into cooperative agreements that, among other things, include procedures for ensuring that American Indians with disabilities living near a reservation or tribal service area are provided vocational rehabilitation services. However, expanding the universe of Indians eligible to receive vocational rehabilitation services to those living "near" a reservation does not alter what qualifies as a "reservation." Moreover, Congress's use of both "reservation" and "tribal service area" undercuts Education's interpretation because it suggests Congress did not believe a tribal service area is simply a type of reservation. HHS agreed with the two recommendations involving its activities and stated that it will seek to clarify with the state of New Jersey the status of the Nanticoke Lenni-Lenape Indians of New Jersey and the Powhatan Renape Nation. HHS also stated that it will continue to address the Accohannock Indian Tribe's compliance issues under the Single Audit Act. (See app. VII for HHS's written comments.) The Departments of Agriculture, Education, the Interior, and HHS also provided technical comments, which we incorporated into the report as appropriate. The Departments of Energy and Labor had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Agriculture, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, and Labor; appropriate congressional committees; and other interested parties. The report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.
This review's objectives were to address (1) the key means by which non-federally recognized tribes have been eligible for federal funding and (2) the amount of federal funding awarded to non-federally recognized tribes for fiscal years 2007 through 2010, by agency and program. In addition, this report provides information about some cases we identified during our work concerning non-federally recognized tribes' eligibility for funding and compliance with federal financial reporting requirements. Because no comprehensive list of non-federally recognized tribes exists, we first compiled a list of about 400 such tribes in the contiguous 48 states. To compile this list, we relied on information from (1) the Department of the Interior, from which we obtained a list of all entities that had submitted a letter of intent to petition for federal recognition through the department's administrative acknowledgment process or had submitted a complete petition but had not received federal recognition as of April 29, 2011; (2) selected states, regarding state-recognized tribes; and (3) other documents and sources that identified other self-identified tribes that are not federally recognized. For this review, we compiled a list of states with state-recognized tribes that are not federally recognized. We compiled this list by gathering information from a variety of sources, including information gathered by Interior's Indian Arts and Crafts Board on state-recognized tribes in all 50 states and information collected by the U.S. Census Bureau. On the basis of this information, we identified 15 states that were most likely to have state-recognized tribes, and we reached out to state officials in each of these states. On the basis of our discussions with officials in these states, we confirmed that 12 of the 15 states had state-recognized tribes. As part of our contact with these 15 states, in addition to inquiring about any state-recognized tribes, we also asked whether officials were aware of any other non-federally recognized tribes within their borders. For the other 33 states, we relied largely on information those states had at one time provided to Interior's Indian Arts and Crafts Board indicating that they had no state-recognized tribes that were not also federally recognized. We spot-checked this information by contacting 20 states for which we were able to identify a state official who could respond to our questions; 9 of these states responded to our inquiry and confirmed that they did not have any state-recognized tribes. To address both objectives of this review, we obtained and reviewed statutes and agency documents; searched databases such as USAspending.gov and the Single Audit Database; analyzed agency-provided data; and met with officials of various federal agencies, including all six agencies with programs that have explicit statutory or regulatory authority to fund certain non-federally recognized tribes, such as state-recognized tribes that are not also federally recognized. These agencies are the Departments of Agriculture, Education, Energy, Health and Human Services, Housing and Urban Development, and Labor.
In addition, we traveled to North Carolina and Virginia to meet with state officials, representatives of Native American organizations, and officials from several non-federally recognized tribes located in those states. We selected North Carolina because it had the highest concentration of non-federally recognized tribes that we identified as having received funding during the 4-year period (7 out of 26), including three of the top four recipients of federal funds during the period. We included Virginia because of its proximity to Washington, D.C., and because tribes in the state have received federal funding in the past. To identify which non-federally recognized tribes had received federal funding during the period 2007 through 2010 and which programs had provided this funding, we searched publicly available funding data from USAspending.gov and agency-provided data. USAspending.gov, a publicly accessible website containing data on federal awards, was launched in December 2007 to comply with the requirements of the Federal Funding Accountability and Transparency Act of 2006. Under that act, federal agencies are required to report information about all federal awards of $25,000 or more. As a result of these searches, we determined that 26 of the approximately 400 non-federally recognized tribes that we had identified had received federal funding from 24 federal programs in the 4-year period. To determine the means by which non-federally recognized tribes were eligible to receive federal funding, we reviewed the authorizing statutes, program regulations, and eligibility requirements for all 24 programs that had awarded funding to the 26 non-federally recognized tribes during fiscal years 2007 through 2010. In addition, we collected information about the organization and legal status of the 26 non-federally recognized tribes. For example, we searched Internal Revenue Service data to identify which of the receiving entities were organized as nonprofits at any time during the 4-year period. We then analyzed how each entity could have qualified for the funding it received, by comparing the organizational and legal status of the recipient with the statutory and regulatory authority of the awarding program. When a non-federally recognized tribe was eligible to receive federal funding from a program through several means, we did not attempt to single out which means qualified the tribe for the funding received. In those instances where we could not identify any means by which a non-federally recognized tribe was eligible for funding received, we contacted agency officials to determine how the entity had qualified. For example, we contacted two agencies to determine how non-state-recognized tribes in New Jersey qualified for funding from programs that are authorized to fund state-recognized tribes but not other non-federally recognized tribes. For each non-federally recognized tribe we identified, we searched USAspending.gov and agency-provided data on relevant identifying information, including tribal names provided by Interior for petitioners and by state officials for state-recognized tribes and other non-federally recognized tribes in their states. Where possible, we used available information such as tribal names and addresses to identify each entity's Data Universal Numbering System (DUNS) number—a nine-digit number assigned by Dun & Bradstreet, Inc., to identify each physical location of a business.
Entities such as non-federally recognized tribes may have multiple DUNS numbers, and we took steps to identify all relevant DUNS numbers by, for example, searching Dun & Bradstreet's records for additional DUNS numbers associated with a particular entity, such as a previous DUNS number where applicable. We then searched USAspending.gov and agency-provided data on tribal names and DUNS numbers to compile an updated data set on federal funding awarded by federal agencies in the 4-year period to the non-federally recognized tribes we identified. To determine the amount of federal funding awarded for fiscal years 2007 through 2010 to non-federally recognized tribes, we used the information that we obtained from USAspending.gov and agency-provided data. We supplemented USAspending.gov data with agency-provided data from (1) all seven programs that we identified as each having awarded a total of more than $1.5 million in funding to non-federally recognized tribes in the 4-year period and (2) some additional programs administered by these agencies. Some programs awarded funding appropriated through the annual appropriation process as well as funding appropriated by the American Recovery and Reinvestment Act of 2009. Although the Catalog of Federal Domestic Assistance (CFDA) may assign unique numbers to track these two funding sources, we did not consider the funding as coming from separate programs for the purposes of tallying the number of programs that awarded funding to non-federally recognized tribes. We took a number of steps to assess the reliability of these data. For example, we compared USAspending.gov data against information in single audit reports filed by those non-federally recognized tribes that filed these reports for one or more of the fiscal years included in this review. We also tested the data for missing values and outliers, interviewed agency officials from all seven programs that awarded more than $1.5 million to non-federally recognized tribes in the 4-year period to discuss how they collect and maintain these data, and contacted knowledgeable agency officials to resolve any inconsistencies in these data sources. For example, by reviewing single audits completed by the Lumbee Tribe of North Carolina, we identified an award that was not listed in USAspending.gov; we contacted agency officials to confirm the amounts awarded by fiscal year and program and to determine why the award was not listed. An agency official stated that the award was not listed in USAspending.gov because it was below the reporting threshold required by the Federal Funding Accountability and Transparency Act of 2006. On the basis of information provided by the agency, we were able to include the award in our updated data set. After taking these steps, we concluded that the updated data set was reliable for the purpose of estimating the amount of federal funding awarded by federal agencies to non-federally recognized tribes for fiscal years 2007 through 2010. After collecting this information, we compared the characteristics of each non-federally recognized tribe with applicable program eligibility requirements, and we checked for compliance with the financial reporting requirement in the Single Audit Act, as amended. As a result, we identified some instances where federal agencies had made grants to likely ineligible non-federally recognized tribes and where an agency had initiated actions to enforce federal financial reporting requirements.
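The matching and cross-checking steps described above can be illustrated with a minimal sketch. The record layouts, field names, and sample values below are all hypothetical; the sketch shows only the general technique of linking award records to an entity by tribal name or by any of its known DUNS numbers, then totaling awards per data source so that discrepancies between sources stand out for follow-up.

```python
# Illustrative sketch only: matching award records to a tribe by name or by
# any of its known DUNS numbers, then totaling awards per data source so that
# discrepancies between sources can be investigated. All record layouts,
# field names, and values here are hypothetical.
from collections import defaultdict

# An entity may have multiple DUNS numbers (e.g., a current and a previous one).
known_duns = {
    "Example Tribe A": {"111111111", "222222222"},
}

usaspending_records = [
    {"recipient": "Example Tribe A", "duns": "111111111", "fy": 2008, "amount": 50_000.0},
]
agency_records = [
    {"recipient": "EXAMPLE TRIBE A", "duns": "222222222", "fy": 2009, "amount": 20_000.0},
]

def totals_by_tribe(records):
    """Sum award amounts per tribe, matching on normalized name or DUNS number."""
    totals = defaultdict(float)
    for rec in records:
        for tribe, duns_set in known_duns.items():
            if rec["duns"] in duns_set or rec["recipient"].casefold() == tribe.casefold():
                totals[tribe] += rec["amount"]
                break
    return dict(totals)

# A difference between the two totals flags an award to reconcile -- for
# example, one below USAspending.gov's $25,000 reporting threshold.
print(totals_by_tribe(usaspending_records))  # {'Example Tribe A': 50000.0}
print(totals_by_tribe(agency_records))       # {'Example Tribe A': 20000.0}
```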
We excluded awards received by non-federally recognized tribes as subawards from other entities, including states, because neither USAspending.gov nor the federal agencies maintain reliable information on subawards. We also excluded loans, procurement contracts, and tax expenditures and did not attempt to determine the amount of funding received directly by individual members of non-federally recognized tribes through, for example, scholarships awarded by federal agencies. We conducted this performance audit from June 2011 through April 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information about tribes whose relationship with the United States was terminated. These tribes are not eligible to petition for federal recognition through the Department of the Interior's administrative acknowledgment process but may have their recognition restored by other means. As of October 1, 2010, federal recognition had been restored for 38 tribes whose relationship with the United States had been terminated (see table 5). These tribes were non-federally recognized from their termination until the effective date of their restoration. For a variety of reasons, it is not possible to provide a comprehensive list of all existing tribes whose recognition was terminated and not restored. For example, some tribes located in western Oregon and terminated in 1956 appear to have been historic tribes that once resided in that area but were no longer organized as tribes at the time of their termination, according to the Department of the Interior. Nonetheless, we identified nine tribes in California whose recognition was terminated and not restored and that therefore are non-federally recognized tribes (see table 6). This appendix provides information on state-recognized tribes. We identified 12 states that had state-recognized tribes. Officials in these 12 states identified 61 state-recognized tribes that are not federally recognized, as of September 2011 (see table 7). Some states—such as California and Texas—have acknowledged non-federally recognized tribes in their states but have not officially identified these entities as state recognized, according to officials we spoke with. We cross-checked the information we received from state officials with information included in (1) a 2008 law review article, (2) the 2010 Census, and (3) correspondence between the states and Interior's Indian Arts and Crafts Board. Trying to reconcile differences among these sources highlighted the difficulties inherent in trying to develop a comprehensive list of state-recognized tribes. For example, questions were raised about the legal status of legislative resolutions that are not signed by a state's governor and the significance of other types of designations, such as "historic" or "acknowledged" tribes. Nonetheless, except for some recent events that have occurred since the 2008 law review article was written and the 2010 Census information was compiled, and a couple of other notable exceptions, the information that we present in table 7 generally matches these other sources.
The notable exceptions include the following: We did not include any entities from New Jersey in table 7. New Jersey does not have any state-recognized tribes, according to New Jersey officials whom we spoke with, as well as correspondence between New Jersey officials and Interior's Indian Arts and Crafts Board. The 2008 law review article counted three potentially state-recognized tribes in New Jersey, and the 2010 Census data counted two state-recognized tribes. We did not include any entities from California and Ohio in table 7. The officials whom we spoke with from those states indicated that their states had not established processes for officially recognizing tribes. The 2010 Census data and the correspondence with Interior's Indian Arts and Crafts Board also confirmed that those states have no state-recognized tribes. On the basis of legislative resolutions in California and Ohio that were not signed by the states' governors, the 2008 law review article counted two state-recognized tribes in California and one state-recognized tribe in Ohio. We included six entities for Massachusetts in table 7 on the basis of the designation provided by a state official. The official also stated that Massachusetts does not have an established formal process for granting state recognition and that these entities are acknowledged by Massachusetts as historic tribes. The 2008 law review article included the same entities as state-recognized tribes, while the 2010 Census counted only one entity in Massachusetts as state recognized. In correspondence with Interior's Indian Arts and Crafts Board, dated November 16, 2000, the state provided a list of the historic tribes but also stated that "[t]here are no officially state recognized tribes." Some of the state-recognized tribes listed in table 7—such as the Mattaponi Tribe and the Pamunkey Indian Tribe—were recognized by colonial governments and have been considered state recognized since the beginning of statehood, while others became state recognized more recently, according to state officials we spoke with. For example, the two state-recognized tribes in Vermont were recognized in 2011, according to an official in that state. In some instances, state governments acknowledged a group long before officially designating it as a state-recognized tribe. Furthermore, some state governments have procedures for recognizing tribes today. As a result, the number of state-recognized tribes may increase, according to state officials we spoke with. According to some state officials and state websites, their states officially recognize certain Indian entities, but these entities are not considered tribes. For example, South Carolina recognizes "Native American Indian groups," which the state defines as a number of individuals assembled together who have different characteristics, interests, and behaviors that do not denote a separate ethnic and cultural heritage today, as they once did. That state also recognizes Native American Special Interest Organizations, which promote Native American culture and address socioeconomic deprivation among people of Indian origin, such as the Little Horse Creek American Indian Cultural Center. In another example, the state of North Carolina recognizes four Indian organizations, each of which represents Indian communities in one or more counties. This appendix provides information on non-federally recognized tribes that received federal funding before fiscal year 2007.
During our review, we identified a significant number of non-federally recognized tribes that received federal funding before fiscal year 2007, as shown in table 8. However, we also found that the publicly available funding data for the pre-2007 period were neither complete nor comprehensive, and we therefore have not included the funding that these tribes received in table 8. In addition to the 64 non-federally recognized tribes listed in table 8, we determined that all but 2 of the 26 non-federally recognized tribes that received funding in fiscal years 2007 through 2010—the exceptions being the Eel River Tribe of Indiana and Wesget Sipu—also received funding before fiscal year 2007. Cumulatively, counting the post- and pre-2007 periods, we identified 117 non-federally recognized tribes that have received federal funding: 26 in fiscal years 2007 through 2010, 64 additional unique non-federally recognized tribes before fiscal year 2007, and 27 additional tribes, now federally recognized, that received funding before the effective date of their recognition. This appendix provides information on various statutory and regulatory provisions that authorize federal programs to provide federal funding for non-federally recognized tribes. Specifically, as shown in table 9, 34 federal programs have explicit statutory or regulatory authority to fund state-recognized tribes, tribes on state reservations, or tribes on or in proximity to a state reservation or rancheria. In addition to the statutes and regulations included in table 9, other statutes and regulations also explicitly include state-recognized tribes. For example: State-recognized tribes are eligible to participate in the Small Business Administration's section 8(a) business development program. This program is one of the federal government's primary means for developing small businesses owned by socially and economically disadvantaged individuals. The Indian Arts and Crafts Act, as amended, authorizes Indian groups that have been formally recognized as Indian tribes by a state legislature or by a state commission or similar organization legislatively vested with state tribal recognition authority to bring lawsuits against persons who offer for sale or sell a good in a manner that falsely suggests it is Indian produced. Under the act, it is unlawful to offer or display for sale or to sell any good in a manner that falsely suggests it is Indian produced, an Indian product, or the product of a particular Indian or Indian tribe or Indian arts and crafts organization resident in the United States. Under the act and its implementing regulations, an Indian is an individual who is a member of an Indian tribe—a federally or state-recognized tribe—or who is certified by an Indian tribe as a non-member Indian artisan. Regulations implementing the Native American Graves Protection and Repatriation Act authorize museums and federal agencies to transfer control of culturally unidentifiable Native American human remains in their possession or control to non-federally recognized Indian tribes under certain circumstances. Additionally, Agriculture's Rural Housing Preservation Grants program and the Department of Housing and Urban Development's (HUD) Indian Community Development Block Grant Program have explicit statutory or regulatory authority to fund certain non-federally recognized tribes.
Specifically, eligible grant recipients under this statute and regulation include any Indian tribe that had been eligible under the State and Local Fiscal Assistance Act of 1972. The act, which was repealed in 1986, established a trust fund in the U.S. Treasury for a revenue-sharing program among federal, state, and local governments, which included Indian tribes and Alaska Native villages having a recognized governing body that performs substantial governmental functions. The act did not define Indian tribe, but the Department of the Interior's Bureau of Indian Affairs (BIA), in cooperation with the Bureau of the Census, developed a list of tribes and villages eligible to participate in the revenue-sharing program. BIA officials were not able to locate a copy of this list. We identified six non-federally recognized tribes—all of which were state recognized—that received revenue-sharing payments and five federally recognized tribes that received revenue-sharing payments before they became federally recognized. Therefore, at least six state-recognized tribes are eligible recipients of Rural Housing Preservation Grants program funding and Indian Community Development Block Grant Program funding. Finally, some statutes identify members of state-recognized tribes, but not the tribes themselves, as eligible participants in or recipients of a government program. For example, the Indian Health Service provides health care services for members of federally recognized tribes as well as urban Indians. The statutory definition of urban Indians includes Indians who are members of a tribe or other organized group of Indians recognized by the state in which they reside. Members of state-recognized tribes are also eligible for certain scholarships administered by HHS's Indian Health Service, such as the Indian health professions scholarships. In addition to the individual named above, Jeffery D. Malcolm (Assistant Director), Mariana Calderón, Colleen Candrl, Jennifer Cheung, Ellen W. Chu, Pamela Davidson, Emily Hanawalt, Catherine Hurley, Ben Shouse, and Jeanette M. Soares made key contributions to this report.
As of January 3, 2012, the United States recognized 566 Indian tribes. Federal recognition confers specific legal status on tribes and imposes certain responsibilities on the federal government, such as an obligation to provide certain benefits to tribes and their members. Some tribes are not federally recognized but have qualified for and received federal funding. Some of these non-federally recognized tribes are state recognized and may be located on state reservations. GAO was asked to address (1) the key means by which non-federally recognized tribes have been eligible for federal funding and (2) the amount of federal funding awarded to non-federally recognized tribes for fiscal years 2007 through 2010. GAO also identified some eligibility and federal financial reporting issues related to non-federally recognized tribes. GAO compiled a list of about 400 non-federally recognized tribes and reviewed information from federal agencies, USAspending.gov, states, and other sources to identify tribes' federal funding and eligibility. Of the approximately 400 non-federally recognized tribes that GAO identified, 26 received funding from 24 federal programs during fiscal years 2007 through 2010. Most of the 26 non-federally recognized tribes were eligible to receive this funding either because of their status as nonprofit organizations or as state-recognized tribes. Similarly, most of the 24 federal programs that awarded funding to non-federally recognized tribes during the 4-year period were authorized to fund nonprofit organizations or state-recognized tribes. In addition, some of these programs were authorized to fund other entities, such as tribal communities or community development financial institutions. For fiscal years 2007 through 2010, the 24 federal programs awarded more than $100 million to the 26 non-federally recognized tribes. Most of the funding was awarded to a few non-federally recognized tribes by a small number of programs. Specifically, 95 percent of the funding was awarded to 9 non-federally recognized tribes, and most of that funding was awarded to the Lumbee Tribe of North Carolina. Similarly, 95 percent of the funding was awarded by seven programs in four agencies, and most of that funding was awarded by one Department of Housing and Urban Development program. During the course of its review, GAO identified some instances where federal agencies had provided funding to non-federally recognized tribes whose grant eligibility is in question and one instance where an agency was taking steps to better enforce federal financial reporting requirements with one tribe. Specifically: The Department of Education awarded American Indian Vocational Rehabilitation Services Program funding to the United Houma Nation, the Lumbee Tribe of North Carolina, and a consortium consisting of the Choctaw-Apache Tribe of Ebarb and the Four Winds Cherokee. Each of these four tribes is state recognized, but it appears that none of them has a "reservation" as required by the statute establishing the program. GAO has substantial questions about Education's interpretation of the term "reservation," which appears broader than the statutory definition supports. The Department of Health and Human Services (HHS) awarded funding to the Nanticoke Lenni-Lenape Indians of New Jersey and the Powhatan Renape Nation—two non-federally recognized tribes in New Jersey—under programs authorized to fund state-recognized tribes. The state of New Jersey, however, does not consider these entities to be state recognized.
HHS has initiated action to enforce federal financial reporting requirements for the Accohannock Indian Tribe. The Accohannock Indian Tribe has not filed its required financial report for 2009, which was due no later than September 30, 2010. In 2009, the Accohannock Indian Tribe reported spending over $1 million in federal funds from three different federal programs administered by the department. The department sent the tribe a letter of inquiry about the delinquent financial report on March 8, 2011, and, after GAO inquired about the issue, an Audit Deficiency Notice on February 7, 2012. GAO recommends that Education and HHS take specific actions to ensure that they are not making grants to ineligible tribes and to enforce federal financial reporting requirements. HHS agreed. Education stated its commitment to review its practices but disagreed with GAO's finding on statutory eligibility for the American Indian Vocational Rehabilitation Services Program, which is discussed more fully in the report.
Thousands of emergency departments operate in the United States, seeing millions of patients each year. In our 2003 report on emergency department crowding, we reported on the extent of crowding in metropolitan areas. Researchers have used three indicators—diversion, wait times, and boarding—in examining emergency department crowding. Between 2001 and 2006, according to NCHS estimates, the number of emergency departments operating in the United States ranged from about 4,600 to about 4,900. During the same period, U.S. emergency departments received an estimated 107 million or more visits each year, ranging from about 107 million in 2001 to about 119 million in 2006. (See table 1.) Most hospitals with emergency departments are located in metropolitan areas, and the majority of emergency department visits occurred in metropolitan areas of the United States. In 2006, about two-thirds of hospitals with emergency departments were located in metropolitan areas compared to about one-third in nonmetropolitan areas. In the same year, about 101 million (85 percent) of the approximately 119 million emergency department visits occurred in metropolitan areas compared to about 18 million (15 percent) in nonmetropolitan areas. (See fig. 1.) Patients come to the emergency department with illnesses or injuries of varying severity, referred to as acuity level. Each acuity level corresponds to a recommended time frame for being seen by a physician—for example, patients with immediate conditions should be seen within 1 minute and patients with emergent conditions should be seen within 1 to 14 minutes. In 2006, urgent patients—patients who are recommended to be seen by a physician within 15 to 60 minutes—accounted for the highest percentage of visits to the emergency department. (See fig. 2.) The expected sources of payment reported for patients receiving emergency department services also vary. For example, from 2001 through 2006, patients with private insurance accounted for the highest number and percentage of visits to the emergency department. During the same period, the percentage of uninsured patients seeking care in emergency departments ranged between 15 and 17 percent of total visits, and the percentage of patients visiting emergency departments with Medicare ranged between 14 and 16 percent. See appendix II for additional data on expected sources of payment and emergency department utilization. In 2003, using three indicators that point to situations in which crowding is likely occurring—diversion, patients leaving before a medical evaluation, and boarding—we reported that emergency department crowding varied nationwide. We also reported that crowding was more pronounced in certain types of communities and that crowding occurred more frequently in hospitals located in metropolitan areas with larger populations, higher population growth, and higher levels of uninsurance. We reported that crowding was more evident in certain types of hospitals, such as hospitals with higher numbers of staffed beds, teaching hospitals, public hospitals, and hospitals designated as certified trauma centers. In terms of factors that contribute to crowding, we reported that crowding is a complex issue and no single factor tends to explain why crowding occurs. However, we found that one key factor contributing to crowding was the availability of inpatient beds for patients admitted to the hospital from the emergency department.
Reasons given by hospital officials and researchers we interviewed for not always having enough inpatient beds to meet demand from emergency patients included economic factors that influence hospitals' capability to meet periodic spikes in demand, as well as competition between emergency department admissions and other admissions for inpatient beds. Other factors cited by researchers and hospital officials as contributing to crowding included the lack of availability of physicians and other community services—such as psychiatric services—and the fact that emergency patients are older, have more complex conditions, and have more treatment and tests provided in the emergency department than in prior years. Further, we reported that hospitals and communities had conducted a wide range of activities to manage crowding in emergency departments, but that problems with crowding persisted in spite of these efforts. These activities included efforts to expand capacity and increase efficiency in hospitals, and community activities to implement systems and rules to manage diversion. These efforts were unable to reverse crowding trends at hospital emergency departments, and we found that studies assessing the effect of these efforts were limited. Researchers use the indicators we reported on in 2003 to point to situations in which crowding is likely occurring in emergency departments. These indicators can point to when crowding is likely occurring, but they also have limitations. For example, patients boarding in the emergency department can indicate that the department's capacity to treat additional patients is diminished, but it is possible for several patients to be boarding while the emergency department has available treatment spaces to see additional patients. Table 2 provides the definition of the three indicators of emergency department crowding we reviewed in this report—diversion, wait times, and boarding—and lists the usefulness and limitations of using these indicators to gauge crowding. Regarding wait times, in our 2003 report, we used "left before a medical evaluation" as an indicator of crowding related to long wait times in an emergency department. Since we issued our report in 2003, researchers have used intervals of wait times—including the length of time to see a physician and the total length of time a patient is in the emergency department—to indicate when an emergency department is crowded. As a result, for this report, we examined wait times more broadly, including data on the time for patients to see a physician, length of stay in the emergency department, and visits in which the patient left before a medical evaluation. Researchers have developed a conceptual model to analyze the factors that contribute to emergency department crowding and develop potential solutions. This model partitions emergency department crowding into three interdependent components: input, throughput, and output. Although factors in many different parts of the health care system may contribute to emergency department crowding, the model focuses on crowding from the perspective of the emergency department. (See fig. 3.) Researchers have used the input-throughput-output model to explain the connection between factors that contribute to emergency department crowding and indicators of crowding.
The three indicators of emergency department crowding—diversion, wait times, and boarding—are most directly related to the input, throughput, and output components, respectively, of the model; but the causes underlying these indicators can relate to other components. For example, a hospital emergency department might experience long wait times—an indicator associated with the throughput component—because of delays in patients receiving laboratory results (related to throughput) or because staff are busy caring for patients boarding in the emergency department due to a lack of access to inpatient beds (related to output). Similarly, an emergency department may divert ambulances (related to input) because the emergency department is full due to the inability of hospital staff to move admitted patients to hospital inpatient beds (related to output). We found that ambulance diversions continue, wait times have increased, and reports of boarding in hospital emergency departments persist. Articles we reviewed also reported on the effect of crowding on quality of care and on strategies proposed to address crowding. National data show that the diversion of ambulances continues to occur, but that the percentage of hospitals that go on diversion and the average number of hours hospitals spend on diversion varied by year. According to NCHS estimates, in 2003, 45 percent of U.S. hospitals reported going on diversion, and in 2004 through 2006, between 25 and 27 percent reported doing so. Of hospitals that reported going on diversion, the average number of hours they reported spending on diversion varied, with an average of 276 hours in 2003 and an average of 473 hours in 2006. (See table 3.) NCHS officials provided the percentage of missing diversion data for each year, which ranged from 3.75 percent in 2003 to 29.1 percent in 2005. NCHS officials, however, were unable to explain the year-to-year variation in the percentage of hospitals going on diversion in the United States and the average hours U.S. hospitals reported spending on diversion. NCHS reported that hospitals in metropolitan areas spent more time on diversion than hospitals in nonmetropolitan areas in 2003 through 2004: almost half of the metropolitan-area hospitals NCHS surveyed reported spending more than 1 percent of their total operating time on diversion, compared to 1 in 10 hospitals in nonmetropolitan areas. Some hospitals, however, reported that their state or local laws prohibit diversion. Other articles that reported on results from surveys also indicated that diversion has continued to occur in some hospitals. In 2006 and 2007, the American Hospital Association conducted surveys of community hospital chief executive officers that asked how much time hospitals spent on diversion in the previous year. The results from these surveys show that some hospitals reported going on diversion. In both American Hospital Association surveys, urban hospitals more often reported diversion hours than rural hospitals. For example, among hospitals responding to the 2006 American Hospital Association survey, about 64 percent of respondents from urban hospitals reported going on diversion, compared to about 17 percent of respondents from rural hospitals. In addition, articles reporting on emergency department crowding in California and Maryland also found that diversion continues to occur and that the time hospitals spent on diversion varied.
National data from NCHS indicate that wait times in the emergency department have increased and in some cases exceeded recommended time frames. For example, the average wait time to see a physician increased from 46 minutes in 2003 to 56 minutes in 2006. Average wait times also increased for patients in some acuity levels. (See fig. 4.) For emergent patients, the average wait time to see a physician increased from 23 minutes to 37 minutes, more than twice as long as recommended for their level of acuity. For immediate, emergent, urgent, and semiurgent patients, NCHS estimates show that some patients were not seen within the recommended time frames for their acuity level. The average wait time to see a physician increased in emergency departments in metropolitan areas, and wait times were longer in emergency departments in metropolitan areas than in nonmetropolitan areas in 2006. In metropolitan-area emergency departments, the average wait time to see a physician increased from 51 minutes in 2003 to 60 minutes in 2006. In nonmetropolitan-area emergency departments, the average wait time to see a physician was estimated to be about 26 minutes in 2003 and 33 minutes in 2006. According to NCHS data, the average length of stay in the emergency department and the percentage of visits in which patients left before a medical evaluation also increased. (See table 4.) See appendix IV for additional information about wait times in the emergency department. More than 25 percent of the 197 articles we reviewed discussed the practice of boarding patients in emergency departments. For example, in 2006 IOM reported that boarding continues to occur and has become a typical practice in hospitals nationwide, with the most boarding occurring at large urban hospitals. One article published in a peer-reviewed journal reported that it is not unusual for critically ill patients to board in the emergency department. In addition, officials we interviewed noted that the practice of boarding patients in emergency departments persists. In particular, officials from the Center for Studying Health System Change noted that boarding still occurs in emergency departments and continues to be one of the main indicators of emergency department crowding. Officials from ACEP noted that boarding continues to occur in emergency departments nationwide and remains a concern for emergency physicians and their patients. National data on the boarding of patients in the emergency department, however, have been limited. In 2006, IOM reported that hospital data systems do not adequately monitor or measure patient flow and therefore may be limited in their ability to capture data on boarding. For example, few systems distinguish between when a patient is ready to move to another location for care and when that move actually takes place. In addition, from 2001 to 2006, NCHS did not collect data on boarding because, according to NCHS officials, data on boarding were not easily obtained from patient records. A question about emergency department boarding was added to NCHS's NHAMCS questionnaire in 2007; however, data from this survey were not available at the time we conducted our analysis. Other articles that reported on results of surveys conducted by professional associations supported officials' statements that boarding has been widespread.
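The wait-time comparisons above reduce to a simple ratio of observed wait to the recommended maximum for each acuity level. The following minimal sketch assumes the recommended upper bounds cited in this report (1 minute for immediate, 14 minutes for emergent, and 60 minutes for urgent patients); the function name is hypothetical.

```python
# Illustrative sketch only: comparing an observed average wait against the
# recommended maximum wait for an acuity level. The upper bounds below are
# the time frames cited in this report; the function name is hypothetical.
RECOMMENDED_MAX_WAIT_MINUTES = {
    "immediate": 1,   # should be seen within 1 minute
    "emergent": 14,   # within 1 to 14 minutes
    "urgent": 60,     # within 15 to 60 minutes
}

def wait_ratio(acuity: str, observed_minutes: float) -> float:
    """Ratio of observed wait to the recommended maximum; values above 1 exceed it."""
    return observed_minutes / RECOMMENDED_MAX_WAIT_MINUTES[acuity]

# Emergent patients' average wait rose from 23 minutes (2003) to 37 minutes (2006).
print(round(wait_ratio("emergent", 37), 2))  # 2.64 -- more than twice the recommendation
```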
For example, in an article reporting on a 2005 ACEP survey of emergency department directors with a 30 percent response rate, 996 of the 1,328 respondents reported that they boarded patients for at least 4 hours on a daily basis, and more than 200 respondents reported that they did so for more than 10 patients per day on average. Ten of the articles we reviewed and officials from ACEP and the Society for Academic Emergency Medicine whom we interviewed raised concerns about the adverse effect of diversion, wait times, or boarding on the quality of patient care, but quantitative evidence of this effect has been limited. Officials from ACEP reported that research has begun to analyze the effect of crowding on patient quality of care and that anecdotal reports indicate patients are being harmed. Ten of the articles we reviewed discussed the effect of diversion, wait times, or boarding on quality of care. One of these articles, the 2006 IOM report, noted that ambulance diversion could lead to catastrophic delays in treatment for seriously ill or injured patients and that boarding may enhance the potential for errors, delays in treatment, and diminished quality of care. Other articles—some of which were published in peer-reviewed journals—also discussed the effect of crowding on the quality of patient care. One examined the relationship between trauma death rates and hospital diversion and suggested that death rates for trauma patients at two hospitals may be correlated with diversion at these hospitals. Another reviewed 24 hospital emergency departments and suggested that when an emergency department experienced an increase in the number of patients leaving before a medical evaluation, fewer patients with pneumonia at the emergency department received antibiotics within the recommended 4 hours. A third, drawing on a database of 90 hospitals, showed that patients who were boarded in the emergency department for more than 6 hours before being transferred to the hospital's intensive care unit had an almost 5 percent higher in-hospital mortality rate than those who were boarded for less than 6 hours. Five other articles reported potential associations between diversion, boarding, and wait times and decreased quality of patient care, including articles on the effect of increasing wait times for nonurgent patients in the emergency department and delayed treatment time for those patients who left before a medical evaluation. While these studies support the widely held assertion that emergency department crowding adversely affects the quality of patient care, a 2006 National Health Policy Forum report stated that the consequences of crowded emergency departments on quality of care have not been studied comprehensively and therefore little quantitative evidence is available to confirm this assumption. Officials from the Society for Academic Emergency Medicine reported that diversion, wait times, and boarding can contribute to reduced quality of care and worse patient outcomes. In addition, officials from both ACEP and the Society for Academic Emergency Medicine noted that additional studies about the effects of diversion, wait times, and boarding on quality of care are needed. Articles we reviewed, and officials and an expert we interviewed, discussed a number of strategies that have been proposed, and in some cases tested, that could decrease emergency department crowding.
These strategies relate to the three interdependent components—input, throughput, and output—of the model of emergency department crowding developed by researchers. While several of these strategies have been tested, the assessment of their effects has generally been limited to one or a few hospitals, and we found no research assessing these strategies on a state or national level. Table 5 outlines some strategies to address emergency department crowding and, to the extent they have been tested, the assessment of their effects on the indicators of crowding. Available information suggests that a lack of access to inpatient beds is the main factor contributing to emergency department crowding. Additionally, other factors—a lack of access to primary care, a shortage of available on-call specialists, and difficulties transferring, admitting, or discharging psychiatric patients—have also been reported as contributing to crowding. Of the 77 articles we reviewed that discussed factors contributing to crowding, 45 articles reported a lack of access to inpatient beds as a factor contributing to emergency department crowding, with 13 of these articles reporting it was the main factor contributing to crowding. (See table 6.) In addition, two individual subject-matter experts we interviewed also reported a lack of access to inpatient beds as the main factor that contributes to emergency department crowding. When inpatient beds are not available for ill and injured patients who require hospital admission, the emergency department may board them, and these patients take up extra treatment spaces and emergency department resources, leaving fewer resources available for other patients. One of the reasons that emergency departments are unable to move admitted patients to inpatient beds may be competition between emergency department admissions and scheduled hospital admissions—for example, for elective surgical procedures—which we also reported on in 2003. This reason was reported by 9 articles we reviewed and by officials from ACEP, the Society for Academic Emergency Medicine, the Center for Studying Health System Change, and three individual subject-matter experts whom we interviewed. In 2006, IOM reported that hospitals might prefer scheduled admissions over admissions from the emergency department because emergency department admissions are considered to be less profitable. One reason that admissions from the emergency department are considered to be less profitable is that these admissions tend to be for medical conditions, such as heart failure and pneumonia, rather than surgical procedures, such as joint replacement surgeries and scheduled cardiovascular procedures. Available data from AHRQ's 2006 Healthcare Cost and Utilization Project show that all 20 of the most-prevalent diagnosis-related groups (DRGs) associated with admissions from the emergency department in 2006 were for medical conditions rather than surgical procedures. In contrast, 7 of the 20 most-prevalent DRGs for nonemergency department admissions in 2006 were for surgical conditions. Officials from the Society for Academic Emergency Medicine told us that because treating surgical conditions is considered more profitable for a hospital than treating emergency medical conditions, hospitals had an incentive to reserve beds for scheduled surgical admissions rather than to give them to patients admitted from the emergency department.
Available information suggests that other factors also contribute to emergency department crowding, including a lack of access to primary care, a shortage of available on-call specialists, and difficulties transferring, admitting, or discharging psychiatric patients. Twenty-two articles we reviewed reported a lack of access to primary care as a factor contributing to emergency department crowding. For example, one of these articles reported that difficulty in receiving care from a primary care provider was associated with an increase in nonurgent emergency department use. Another article described a study in New Jersey that indicated that almost one-half of all emergency department visits within the state that did not result in hospital admission could have been avoided with improved access to primary care services. Additionally, officials from the Center for Studying Health System Change and the Society for Academic Emergency Medicine mentioned a lack of access to primary care as a factor contributing to emergency department crowding. When patients do not have a primary care physician, or cannot obtain an appointment with a primary care physician, they may go to the emergency department to seek primary care services. In addition, patients who do not have access to primary care may defer care until their condition has worsened, potentially increasing the emergency department resources needed to treat the patient's condition. These situations involve patients who could have been treated outside of the emergency department and may add to the number of patients seeking care at the emergency department. Articles we reviewed provided conflicting information on the effect of increasing numbers of uninsured patients on emergency department crowding. Five of the 22 articles that mentioned a lack of access to primary care as a factor also reported that increasing numbers of uninsured patients contributed to emergency department crowding. For example, 1 article indicated that a reason for longer wait times at 30 California hospitals in lower-income areas was that these hospitals treat a disproportionate number of uninsured patients who may lack access to primary care. Two other articles we reviewed, however, suggested that increasing numbers of uninsured patients is not a factor contributing to crowding. For example, the Center for Studying Health System Change reported that, contrary to the popular belief that uninsured people are the major cause of increased emergency department use, insured Americans accounted for most of the 16 percent increase in visits between 1996 through 1997 and 2000 through 2001. In addition, officials from AHRQ noted that a larger proportion of patients using the emergency department are insured than uninsured. Seven articles and officials from the Center for Studying Health System Change, ACEP, the American Hospital Association, and the American Medical Association whom we interviewed reported that a shortage of on-call specialists available to emergency departments is a factor that contributes to emergency department crowding. Hospitals often employ on-call specialists, meaning specialists such as neurosurgeons or orthopedic surgeons who travel to the hospital or emergency department only when needed and called. When patients wait for long periods in the emergency department for an on-call specialist who is not immediately available—for example, busy covering other hospitals or in surgery—these patients might not receive timely and appropriate care.
In addition, these patients may occupy treatment spaces and resources that could be used to treat other patients, potentially crowding the emergency department. In 2006, IOM reported that over the preceding several years, hospitals had found it increasingly difficult to secure specialists for their emergency department patients. Additionally, another article reported the results of a 2007 American Hospital Association survey of hospital chief executive officers that asked about maintaining on-call specialist coverage for the emergency department. While this survey had a low response rate, it indicates that hundreds of emergency departments reported experiencing difficulty in maintaining on-call coverage for certain specialists. For example, of those chief executive officers who responded to the survey (840 chief executive officers; 17 percent of those surveyed), 44 and 43 percent noted difficulty in maintaining emergency department on-call coverage for orthopedic surgeons and neurosurgeons, respectively. Additionally, officials from the Center for Studying Health System Change told us that delays in obtaining specialty services may contribute to crowding. Neither the articles we reviewed nor the officials and individual subject-matter experts we interviewed quantitatively assessed the relationship between the availability of on-call specialists and emergency department crowding. Three articles we reviewed and officials from NCHS, ACEP, and the Center for Studying Health System Change whom we interviewed reported difficulties transferring, admitting, or discharging psychiatric patients from the emergency department as a factor contributing to emergency department crowding. One of these articles reported the results of a national ACEP survey of emergency physicians that asked about psychiatric patients in the emergency department. Of the physicians responding to the survey (328 physicians; approximately 23 percent of those surveyed), about 40 percent reported that, on average, psychiatric patients waited in the emergency department for an inpatient bed longer than 8 hours after the decision to admit them had been made, including about 9 percent who reported that psychiatric patients waited more than 24 hours. Medical patients in the emergency department—those diagnosed with nonpsychiatric conditions—generally waited less time for an inpatient bed: 7 percent of responding physicians reported that, on average, medical patients waited longer than 8 hours after the decision to admit them had been made, and slightly less than 1 percent reported that medical patients waited more than 24 hours. In addition, the survey respondents indicated that psychiatric patients waiting to be transferred or discharged added to the burden of an already crowded emergency department and affected access for all patients requiring care. Also, officials from NCHS said that psychiatric patients in the emergency department are a national concern because they are frequent visitors to the emergency department and may spend more than 24 hours there. National data from NCHS show that, in 2006, psychiatric patients constituted a small percentage of emergency department visits but had a longer average length of stay in the emergency department.
Almost 3 percent of emergency department visits in 2006 were by patients presenting with a complaint of a psychological or mental disorder, and these patients had an average length of stay in the emergency department that was longer than the average for all other visits (397 minutes, compared to 194 minutes for all other visits). Emergency department patients with psychiatric disorders may need to be isolated from other patients and may require resources that are not available in many hospitals. Hospital emergency departments often have limited or no specialized psychiatric facilities, and emergency department staff may experience difficulties transferring such patients to other facilities, admitting them to the hospital, or discharging them from the emergency department. Additionally, emergency department staff may spend a disproportionate amount of time and resources caring for psychiatric patients while these patients wait for transfer, admission, or discharge. Our literature review identified five other factors that may contribute to emergency department crowding. For example, in 2006 IOM reported these five factors—an aging population, increasing acuity of patients, staff shortages, hospital processes, and financial factors—as factors that might contribute to emergency department crowding, and these five factors were also mentioned in 14 other articles we reviewed. However, the officials and individual subject-matter experts we interviewed said little about these factors and how they contribute to crowding. HHS provided comments on a draft of this report, which are included in appendix V. In its comments, HHS noted that the report demonstrates that emergency department wait times continue to increase and frequently exceed national standards. HHS also commented that strengths of the report include its clarity, focus, and tone. In addition, HHS commented on the scope of the report and limitations of the indicators used in it. HHS suggested that the information provided in the report would be strengthened by inclusion of articles published prior to 2003 and articles reporting on studies conducted outside of the United States. We focused our literature review on articles published since 2003 to review information made available since we issued our 2003 report. While articles reporting on studies conducted outside of the United States may include valuable information regarding aspects of emergency department crowding as it occurs in other countries, we reviewed articles reporting on studies conducted in the United States because our focus was on the U.S. health care system. HHS also commented that the indicators of crowding that we used had limitations. As we noted both in our 2003 report and in this report, these indicators have limitations but, in the absence of a widely accepted standard measure of crowding, they are used by researchers to point to situations in which crowding is likely occurring. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. The report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix VI. To examine national data made available since 2003 on emergency department diversion and wait times, we obtained and reviewed data collected by the National Center for Health Statistics (NCHS) through its National Hospital Ambulatory Medical Care Survey (NHAMCS). We analyzed available NCHS data for 2001 through 2006 on diversion and wait times to determine what changes, if any, have occurred over time. We analyzed wait time data by patient acuity level and by hospital characteristics, such as hospital ownership, metropolitan or nonmetropolitan area location, and geographic region. We analyzed wait times in the emergency department using NCHS’s data on the recommended time for a patient to see a physician based on patient acuity levels. Further, to determine the average length of stay in the emergency department for patients who presented with a psychological or mental disorder, we analyzed emergency department length of stay by the type of patient complaint at the time of the visit. We also analyzed NCHS data on emergency department utilization by payer source—including Medicare, Medicaid, the State Children’s Health Insurance Program, self-pay, and no charge or charity care—and by hospital characteristics, such as whether the hospital was located in a metropolitan or nonmetropolitan area, to provide context for our work. We also reviewed and analyzed data from the Agency for Healthcare Research and Quality’s (AHRQ) Healthcare Cost and Utilization Project to determine the diagnosis-related groups (DRG) most commonly associated with hospital admissions from the emergency department and most commonly associated with nonemergency department admissions—information we determined was related to factors that contribute to crowding. We obtained NCHS and AHRQ data beginning with 2001 because these data became publicly available in 2003 or later, meeting the criterion for inclusion in our analysis. Some data were not available from NCHS for all years between 2001 and 2006 because of revisions made by NCHS to questions on surveys used to collect information and because of low response rates to certain questions on these surveys. At the time we conducted our analysis, the most recent year for which data were available from NCHS and AHRQ was 2006. In this report, we present NCHS estimates; for those cases in which we report an increase or other comparison of these estimates, NCHS tested the differences and found them statistically significant. To assess the reliability of national data from NCHS and AHRQ, we interviewed agency officials and reviewed the methods they used for collecting and reporting these data. We resolved discrepancies we found between the data provided to us and data in published reports by corresponding with officials from NCHS to obtain sufficient explanations for the differences. Based on these steps, we determined that these data were sufficiently reliable for our purposes. To examine information available since 2003 about three indicators of emergency department crowding and the factors that contribute to crowding, we conducted a literature review.
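Because NHAMCS is a sample survey, the national estimates described above are weighted tallies: each sampled visit carries a weight that projects it to a national count. A minimal sketch of the weighted-average computation follows, using hypothetical field names (weight, wait_minutes, acuity) rather than the actual NHAMCS variable names.

    def weighted_mean_wait(visits, acuity):
        # National-estimate style average: each sampled visit contributes
        # its wait time multiplied by its survey weight.
        total_weight = 0.0
        weighted_sum = 0.0
        for v in visits:
            if v["acuity"] == acuity and v["wait_minutes"] is not None:
                total_weight += v["weight"]
                weighted_sum += v["weight"] * v["wait_minutes"]
        if total_weight == 0:
            raise ValueError("no sampled visits at this acuity level")
        return weighted_sum / total_weight

This sketch omits the survey’s complex design (strata and sampling units), which is needed to compute the standard errors behind the significance tests that NCHS performed on the comparisons we report.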
In examining information made available since 2003 about indicators and factors of crowding during our literature review, we analyzed articles for what was reported on the effect of crowding on patient quality of care and proposed strategies to address crowding. We conducted a structured search of 16 databases that included peer-reviewed journal articles and other periodicals to capture articles published on or between January 1, 2003, and August 31, 2008. We searched these databases for articles with key words in their title or abstract related to emergency department crowding, or indicators and factors of crowding, such as versions of the words “crowding,” “emergency department,” “diversion,” “wait time,” and “boarding.” We also included articles published on or between January 1, 2003, and August 31, 2008, that were identified as a result of our interviews with federal officials, professional and research organizations, and subject-matter experts. We also searched related Web sites for additional emergency department crowding publications, including articles reporting on surveys conducted by professional organizations, such as the American Hospital Association. For these articles, we identified the number of respondents and the response rates, and for those with lower response rates, we noted them in our report. From all of these sources, we identified over 300 articles, publications, and reports (which we call articles) published from January 1, 2003, through August 31, 2008. From these more than 300 articles, we excluded those that were published outside of the United States, reported on subjects or data from outside the United States, were only available in abstract form, had a focus other than day-to-day emergency department operations, or were unrelated to emergency department crowding. We supplemented the articles that were not excluded from our search by reviewing the references contained in their bibliographies for additional articles published on or between January 1, 2003, and August 31, 2008, on emergency department crowding that met our inclusion criteria. In total, we included 197 articles in our literature review and analyzed these articles to summarize information on emergency department crowding, including information on diversion, wait times, and boarding; the effect of these indicators of crowding on quality of care; proposed strategies to decrease these indicators; and factors that contributed to emergency department crowding. For a complete bibliography of these articles, see GAO-09-348SP. Additionally, we interviewed officials from federal agencies and one state agency, officials from professional, research, and other hospital-related organizations, and individual subject-matter experts to obtain and review information on indicators of emergency department crowding and factors that contribute to crowding. During our interviews, we asked about the effect of crowding on patient quality of care and proposed strategies for addressing crowding. We interviewed federal officials from the Department of Health and Human Services’ Centers for Medicare & Medicaid Services and the Office of the Assistant Secretary for Preparedness and Response, and officials from NCHS and AHRQ who have conducted research on emergency department utilization and crowding. We also interviewed officials from the Massachusetts Department of Public Health to discuss the state’s planned implementation of a new diversion policy in January 2009.
We interviewed officials from professional organizations, including the American College of Emergency Physicians (ACEP), the American Hospital Association, the American Medical Association, the Emergency Nurses Association, the National Association of EMS Physicians, and the Society for Academic Emergency Medicine. Some officials from ACEP and the Society for Academic Emergency Medicine have published research in peer-reviewed journals. In addition, we interviewed officials from research organizations, such as the California Healthcare Foundation, the Center for Studying Health System Change, the Heritage Foundation, and the Robert Wood Johnson Foundation’s Urgent Matters. We interviewed officials from the Joint Commission (an organization involved in hospital accreditation), the Medicare Payment Advisory Commission (an organization that studies Medicare payment issues and reports to Congress), and the National Quality Forum (an organization that develops quality measures for emergency department care). We also interviewed three individual subject-matter experts who have conducted research on emergency department crowding and strategies to reduce crowding. We conducted this performance audit from May 2008 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on nationally representative estimates of emergency departments and emergency department visits in the United States by characteristics such as patient acuity level, payer source, hospital ownership type, geographic region, and type of area (metropolitan or nonmetropolitan) from the National Center for Health Statistics’ (NCHS) National Hospital Ambulatory Medical Care Survey (NHAMCS). Specifically, for 2001 through 2006 this appendix presents the following information: the percentage of emergency departments by hospital ownership type, by geographic region, and by type of area (metropolitan or nonmetropolitan) (table 7); the number and percentage of emergency department visits by acuity level (figure 5) and payer source (table 8); the number and percentage of emergency department visits by hospital ownership type, geographic region, and type of area (table 9); and the number and percentage of emergency department visits that resulted in hospital admissions (table 10). Researchers continue to use diversion, wait times (including patients who left before a medical evaluation), and boarding as indicators to point to situations in which crowding is likely occurring in emergency departments; however, as we reported in our 2003 report, there is no standard measure of the extent to which emergency departments are experiencing crowding. In the absence of a widely accepted standard measure of crowding, researchers have proposed and conducted limited testing of potential measures of crowding. During our literature review of articles on emergency department crowding published on or between January 1, 2003, and August 31, 2008, we identified proposed measures of crowding that researchers have tested, either in a single hospital setting or for a limited period of time. Table 11 describes these proposed measures.
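To make the general form of such measures concrete, the following is a minimal sketch of a simple occupancy-style indicator. It is purely illustrative, an assumption standing in for the specific published measures summarized in table 11, none of which it reproduces.

    def occupancy_ratio(patients_present, treatment_spaces):
        # A simple occupancy-style crowding indicator: patients physically
        # in the emergency department divided by staffed treatment spaces.
        if treatment_spaces <= 0:
            raise ValueError("treatment spaces must be a positive count")
        return patients_present / treatment_spaces

    # A ratio above 1.0 means more patients than treatment spaces, the
    # situation in which boarding in hallways typically occurs.

Proposed measures have typically combined several such inputs, such as census, acuity mix, boarding counts, and waiting room volume, which is one reason no single measure has been widely adopted.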
While researchers have claimed varying levels of success using these measures to gauge crowding, we found no widely accepted measure of emergency department crowding and found that none of these measures had been widely implemented by researchers and health care practitioners. This appendix provides information on nationally representative estimates of intervals of emergency department wait times in the United States: wait time to see a physician, length of stay in the emergency department, and the percentage of visits in which patients left before a medical evaluation. Specifically, this appendix presents the following information from the National Center for Health Statistics’ (NCHS) National Hospital Ambulatory Medical Care Survey (NHAMCS): for 2003 through 2006 (the only years for which data were available from NCHS), the percentage of emergency department visits by wait time to see a physician (table 12), average and median wait times to see a physician by patient acuity level (figure 6), average wait times to see a physician by payer type, hospital type, and geographic region (table 13), and average wait times by the hospitals’ percentage of visits in which patients left before a medical evaluation (table 14); and for 2001 through 2006, the percentage of visits by emergency department length of stay (table 15), the average and median length of stay by patient acuity level (figure 7), the average length of stay in the emergency department by payer type, hospital type, and geographic region (table 16), and the average length of stay by the hospitals’ percentage of visits in which patients left before a medical evaluation (table 17). In addition to the contact named above, Kim Yamane, Assistant Director; Danielle Bernstein; Susannah Bloch; Ted Burik; Aaron Holling; Carla Jackson; Ba Lin; Jeff Mayhew; Jessica Smith; and Jennifer Whitworth made key contributions to this report.
Hospital emergency departments are a major part of the nation's health care safety net. Of the estimated 119 million visits to U.S. emergency departments in 2006, over 40 percent were paid for by federally-supported programs. These programs--Medicare, Medicaid, and the State Children's Health Insurance Program--are administered by the Department of Health and Human Services (HHS). There have been reports of crowded conditions in emergency departments, often associated with adverse effects on patient quality of care. In 2003, GAO reported that most emergency departments in metropolitan areas experienced some degree of crowding (Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities, GAO-03-460). For example, two out of every three metropolitan hospitals reported going on ambulance diversion--asking ambulances to bypass their emergency departments and instead transport patients to other facilities. GAO was asked to examine information made available since 2003 on emergency department crowding. GAO examined three indicators of emergency department crowding--ambulance diversion, wait times, and patient boarding--and factors that contribute to crowding. To conduct this work, GAO reviewed national data; conducted a literature review of 197 articles; and interviewed officials from HHS and professional and research organizations, and individual subject-matter experts. Emergency department crowding continues to occur in hospital emergency departments according to national data, articles we reviewed, and officials we interviewed. National data show that hospitals continue to divert ambulances, with about one-fourth of hospitals reporting going on diversion at least once in 2006. National data also indicate that wait times in the emergency department increased, and in some cases exceeded recommended time frames. For example, the average wait time to see a physician for emergent patients--those patients who should be seen in 1 to 14 minutes--was 37 minutes in 2006, more than twice as long as recommended for their level of urgency. Boarding of patients in the emergency department who are awaiting transfer to an inpatient bed or another facility continues to be reported as a problem in articles we reviewed and by officials we interviewed, but national data on the extent to which this occurs are limited. Moreover, some of the articles we reviewed discussed strategies to address crowding, but these strategies have not been assessed on a state or national level. Articles we reviewed and individual subject-matter experts we interviewed reported that a lack of access to inpatient beds continues to be the main factor contributing to emergency department crowding, although additional factors may contribute. One reason for a lack of access to inpatient beds is competition between hospital admissions from the emergency department and scheduled admissions--for example, for elective surgeries, which may be more profitable for the hospital. Additional factors may contribute to emergency department crowding, including patients' lack of access to primary care services or a shortage of available on-call specialists. In commenting on a draft of this report, HHS noted that the report demonstrates that emergency department wait times are continuing to increase and frequently exceed national standards. HHS also provided technical comments, which we incorporated as appropriate.
While IT can enrich people’s lives and improve organizational performance, we have previously reported that federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. For example, in July 2013, we testified that the federal government continued to spend billions of dollars on troubled IT investments, and we identified billions of dollars’ worth of failed and challenged IT projects. OMB plays a key role in overseeing how federal agencies manage their IT investments, working with agencies to better plan, justify, and manage those investments. Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. OMB also guides agencies in developing sound business cases for IT investments and establishing management processes for overseeing these investments throughout their life cycles. The scope of this undertaking is quite large: in planning for fiscal year 2014, 27 federal agencies reported plans to spend about $82 billion on IT. Of that, $38.7 billion is to be spent on major IT investments, $37.6 billion on non-major IT investments, and $5.5 billion on classified Department of Defense IT investments. Within OMB, the Office of E-Government and Information Technology, headed by the Federal CIO, directs the policy and strategic planning of federal IT investments and is responsible for oversight of federal information technology spending. In June 2009, OMB deployed a public Web site to further improve the transparency and oversight of agencies’ IT investments. Known as the IT Dashboard, this site displays federal agencies’ cost, schedule, and performance data for over 700 major federal IT investments at 27 federal agencies, accounting for $38.7 billion of those agencies’ planned $82 billion budget for fiscal year 2014. According to OMB, these data are intended to provide a near-real-time perspective on the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold the government agencies accountable for progress and results. The Dashboard visually presents performance ratings for agencies and for individual investments using metrics that OMB has defined—cost, schedule, and CIO evaluation. The Web site also provides the capability to download certain data. Figure 1 is an example of VA’s portfolio page as recently depicted on the Dashboard. (The ratings are explained in the narrative following the figure.) The Dashboard’s data span the period from its June 2009 inception to the present and are based, in part, on agency assessments of individual investment performance and each agency’s budget submissions to OMB. The cost and schedule data agencies are required to submit to the Dashboard have changed over time, as have the related calculations. For example, in response to our recommendations (further discussed in the following section), OMB changed how the Dashboard calculates the cost and schedule ratings in July 2010 to include “in progress” milestones rather than just “completed” ones for a more accurate reflection of current investment status. While the required cost and schedule data have changed, the thresholds for assigning cost and schedule ratings have remained constant over the life of the Dashboard.
Specifically, the Dashboard assigns rating colors (red, yellow, green) based on agency-submitted cost and schedule variances, as shown in table 1. Unlike the variance-based cost and schedule ratings, the Dashboard’s “Investment Evaluation by Agency CIO” (also called the CIO rating) is determined by agency officials. According to OMB’s instructions, each agency CIO is to assess his or her IT investments against a set of six pre-established evaluation factors identified by OMB (shown in table 2) and then assign a rating of 1 (high risk) to 5 (low risk) based on the CIO’s best judgment of the level of risk facing the investment. OMB recommends that CIOs consult with appropriate stakeholders in making their evaluation, including Chief Acquisition Officers, program managers, and other interested parties. According to an OMB staff member, agency CIOs are responsible for determining appropriate thresholds for the risk levels and for applying them to investments when assigning CIO ratings. OMB requires agencies to update these ratings as soon as new information that will affect an investment’s assessment becomes available. After agencies assign a level of risk to each investment, the Dashboard assigns colors to CIO ratings according to a five-point scale, as illustrated in table 3. An OMB staff member from the Office of E-Government and Information Technology noted that the CIO rating should be a current assessment of future performance based on historical results and is the only Dashboard performance indicator that has been defined and produced the same way since the Dashboard’s inception. Furthermore, OMB issued guidance in August 2011 stating, among other things, that agency CIOs shall be held accountable for the performance of IT program managers based on their governance process and the data reported on the IT Dashboard, which includes the CIO rating. According to OMB, the addition of CIO names and photos on Dashboard investments is intended to highlight this accountability and link it to the Dashboard’s reporting on investment performance. We have previously reported that OMB has taken significant steps to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard and by improving the accuracy of investment ratings. We also found issues with the accuracy and reliability of cost and schedule data and recommended steps that OMB should take to improve these data. In July 2010, we reported that the cost and schedule ratings on OMB’s Dashboard were not always accurate for the investments we reviewed, because these ratings did not take into consideration current performance. As a result, the ratings were based on outdated information. We recommended that OMB report on its planned changes to the Dashboard to improve the accuracy of performance information and provide guidance to agencies to standardize milestone reporting. OMB agreed with our recommendations and, as a result, updated the Dashboard’s cost and schedule calculations to include both ongoing and completed activities. Similarly, our report in March 2011 noted that OMB had initiated several efforts to increase the Dashboard’s value as an oversight tool and had used its data to improve federal IT management. We also reported, however, that agency practices and the Dashboard’s calculations contributed to inaccuracies in the reported investment performance data.
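The rating mechanics just described lend themselves to a compact illustration. The following Python sketch mirrors the two mappings: variance to color, and CIO rating to color on the five-point scale. The 10 and 30 percent variance cut points and the grouping of the five CIO rating levels into colors are illustrative assumptions, not reproductions of the exact thresholds in tables 1 and 3.

    def variance_color(variance_pct):
        # Map an agency-submitted cost or schedule variance (in percent)
        # to a Dashboard-style color; the cut points are assumed.
        v = abs(variance_pct)
        if v < 10:
            return "green"
        if v < 30:
            return "yellow"
        return "red"

    def cio_rating_color(rating):
        # Map a CIO risk rating, from 1 (high risk) to 5 (low risk), to a
        # color on the five-point scale; the grouping is assumed.
        colors = {1: "red", 2: "red", 3: "yellow", 4: "green", 5: "green"}
        if rating not in colors:
            raise ValueError("ratings run from 1 (high risk) to 5 (low risk)")
        return colors[rating]

The essential difference the sketch makes visible is that variance_color is computed mechanically from agency-submitted numbers, while the input to cio_rating_color is itself a judgment, which is why the consistency of CIO ratings with underlying documentation is examined later in this report.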
In that March 2011 report, for instance, we found missing data submissions or erroneous data at each of the five agencies we reviewed, along with instances of inconsistent program baselines and unreliable source data. As a result, we recommended that the agencies take steps to improve the accuracy and reliability of their Dashboard information, and that OMB improve how it rates investments relative to current performance and schedule variance. Most agencies generally concurred with our recommendations, and three have taken steps to address them. OMB agreed with our recommendation for improving ratings for schedule variance. It disagreed with our recommendation to improve how it reflects current performance in cost and schedule ratings, but more recently made changes to Dashboard calculations to address this while also noting challenges in comprehensively evaluating cost and schedule data for these investments. We also reported on OMB’s guidance to agencies for reporting their IT investments in September 2011 and found that it did not ensure complete reporting. Specifically, agencies differed on what investments they included as an IT investment. Among other things, we recommended that OMB clarify its guidance on reporting IT investments to specify whether certain types of systems—such as space systems—are to be included. OMB did not agree that further efforts were needed to clarify reporting in regard to the types of systems. Subsequently, in November 2011, we noted that the accuracy of investment cost and schedule ratings had improved since our July 2010 report because OMB had refined the Dashboard’s cost and schedule calculations. Most of the ratings for the eight investments we reviewed as part of our November 2011 report were accurate, although we noted that more could be done to inform oversight and decision making by emphasizing recent performance in the ratings. We recommended that the General Services Administration comply with OMB’s guidance for updating its ratings when new information becomes available (including when investments are rebaselined). The agency concurred and has since taken actions to address this recommendation. Since we had previously recommended that OMB improve how it rates investments, we did not make any further recommendations to OMB. More recently, in October 2012, we reported that CIOs at six agencies rated a majority of investments listed on the federal IT Dashboard as low or moderately low risk from June 2009 through March 2012. Additionally, two agencies, the Department of Defense and the National Science Foundation, rated no investments as high or moderately high risk during this time period. Agencies generally followed OMB’s instructions for assigning CIO ratings, although the Department of Defense’s ratings were unique in reflecting additional considerations, such as the likelihood of OMB review. Most of the selected agencies reported various benefits associated with producing and reporting CIO ratings, such as increased quality of their performance data and greater transparency and visibility of investments. We recommended that OMB analyze agencies’ investment risk over time as reflected in the Dashboard’s CIO ratings and present its analysis with the President’s annual budget submission, and that the Department of Defense ensure that its CIO ratings reflect available investment performance assessments and its risk management guidance.
Both agencies concurred with our recommendations, and OMB reported on CIO rating trends in the fiscal year 2014 budget submission; however, the Department of Defense has not updated any of its ratings since September 2012. Further, we studied OMB’s efforts to help agencies address IT projects with cost overruns, schedule delays, and performance shortfalls in June 2013. In particular, we reported that OMB used CIO ratings from the Dashboard, among other sources, to select at-risk investments for reviews known as TechStats. OMB initiated these reviews in January 2010 to further improve investment performance, and subsequently incorporated the TechStat model into its 25-point plan for reforming federal IT management. We reported that OMB and selected agencies had held multiple TechStat sessions but that additional OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects and that resulting cost savings were valid. Among other things, we recommended that OMB require agencies to address their highest-risk investments and to report on how they validated the outcomes. OMB generally agreed with our recommendations and stated that it and the agencies were taking appropriate steps to address them. As of August 2013, CIOs at the eight selected agencies rated 198 of their 244 major investments listed on the Dashboard as low risk or moderately low risk. Of the remaining 46 investments, 41 were rated medium risk, and 5 were rated high risk or moderately high risk. Historically, over the life of the Dashboard from June 2009 to August 2013, low or moderately low risk investments accounted for an average of 72 percent of all ratings at the eight agencies, medium risk ratings an average of 22 percent, and high risk or moderately high risk ratings an average of 6 percent. The CIO ratings and associated number of investments at each of the eight agencies as reported on the Dashboard over the past 4 years are presented in figure 2. When comparing the investments’ first and most recent ratings, the agencies we reviewed showed generally positive results. Specifically, of the 383 total investments listed on the Dashboard from June 2009 to August 2013 for the selected agencies, the ratings of 121 increased (meaning lower risk) and the ratings of 81 decreased (meaning higher risk). Additionally, a significant portion of investments had returned to their original rating (74), and about one-third of investments’ ratings had never changed (107). (See fig. 3.) When considered individually, five of the eight selected agencies—Agriculture, Commerce, Energy, Treasury, and VA—had more investments’ ratings increase than decrease, whereas the other three—Justice, Transportation, and SSA—had more decrease. Figure 4 presents the net changes in investments’ ratings for each agency during the reporting period of June 2009 to August 2013. The agencies cited additional oversight or program reviews as factors that contributed to improved ratings. Furthermore, Agriculture and Commerce both attributed improved ratings to enhanced timeliness and sufficiency of investment reporting. Both of these agencies factor investment reporting into their CIO ratings, and both raised ratings as investment managers provided better performance and risk information. This suggests that caution should be exercised when interpreting changing risk levels for investments, as rating changes by agencies may not be entirely due to investment performance.
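The first-versus-latest comparison behind figure 3 can be expressed as a short classification routine. A minimal sketch, assuming each investment’s CIO ratings are available as a chronological list of values from 1 to 5 (higher meaning lower risk):

    from collections import Counter

    def classify_trajectory(ratings):
        # Compare an investment's first and most recent CIO ratings.
        first, latest = ratings[0], ratings[-1]
        if all(r == first for r in ratings):
            return "never changed"
        if latest > first:
            return "increased (lower risk)"
        if latest < first:
            return "decreased (higher risk)"
        return "returned to original rating"

    def summarize(portfolio):
        # Tally trajectories across {investment name: [ratings...]}.
        return Counter(classify_trajectory(r) for r in portfolio.values())

Under the counts reported above, summarize would return 121 increased, 81 decreased, 74 returned to their original rating, and 107 never changed across the 383 investments.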
Agencies also commented that the CIO ratings and Dashboard reporting had spurred improved investment management and risk mitigation. Additionally, the total number of investments that the eight agencies reported on the Dashboard has varied over time, which affects the number of investments receiving CIO ratings (see fig. 5). The variation in the number of investments reported on the Dashboard can be attributed, in part, to agencies’ ability to add, downgrade, and remove investments throughout the annual federal budget process. However, as we concluded in September 2011, OMB’s guidance also did not ensure complete reporting of IT investments by agencies. Specifically, we found that OMB’s definition of an IT investment is broad and that the 10 agencies we evaluated differed on what systems they included as IT investments. For example, 5 of the 10 agencies we reviewed consistently considered investments in research and development systems as IT, and 5 did not. Consequently, we recommended that OMB clarify its guidance on reporting IT investments to specify whether certain types of systems—such as space systems—were to be included. OMB did not agree that further efforts were needed to clarify reporting in regard to the types of systems. Now, 2 years later, and given the continuing lack of clarity in investment reporting guidance, agencies have elected to withdraw investments from the Dashboard that are clearly IT. Because we have continued to identify inappropriate reclassifications of IT investments, we continue to believe this recommendation has merit. For example, as part of the most recent budget process, Energy officials reported several of Energy’s supercomputer investments as facilities rather than IT, thus removing those investments from the Dashboard and accounting for a portion of the recent decrease in investments. Energy officials also stated that OMB approved this change. These investments account for $368 million, or almost 25 percent, of Energy’s fiscal year 2012 IT spending, and include the National Nuclear Security Administration’s Sequoia system, which Energy reported as the most powerful computing system in the world as of June 2012. According to Energy officials, these investments were recategorized because they include both supercomputers and laboratory facilities (which are not IT). As another example, the Deputy Secretary of Commerce issued a directive in September 2012 which resulted in the removal of Commerce’s satellite investments from the Dashboard. As we found in 2011, Commerce only reported the ground systems of a spacecraft as IT investments, and not the technology components on the spacecraft itself. With this directive, Commerce removed the ground-based portions from IT investment reporting as well, accounting for $447 million, or 26 percent, of the department’s fiscal year 2012 IT spending. In making this decision, Commerce determined that it needed to refocus oversight efforts to a more appropriate level and consequently minimized the role of the CIO and others in the oversight of satellites. A Commerce official stated that Commerce plans to exclude all such investments from the department’s fiscal year 2015 IT budget submission. Such recategorizations are contrary to the definition of IT set forth in the Clinger-Cohen Act of 1996, which includes equipment used in the transmission or reception of data, such as the systems used to receive and transmit satellite data (40 U.S.C. § 11101(6)). A staff member from the Office of E-Government stated that OMB could not stop agencies from making such recategorizations and that OMB had no control over such agency decisions.
By gathering incomplete information on IT investments, OMB increases the risk of not fulfilling its oversight responsibilities, of agencies making inefficient and ineffective investment decisions, and of Congress and the public being misinformed as to the performance of federal IT investments. As part of the budget process, OMB is required to analyze, track, and evaluate the risks and results of all major IT investments. The Dashboard gives the public access to the same tools and analysis that the government uses in conducting this performance oversight. According to OMB, Dashboard data are intended to provide a near-real-time perspective on the performance of major investments. However, by freezing the public Dashboard data for as long as 8 months each year during the budget process, OMB decreases the utility of the Dashboard as a tool for greater IT investment oversight and transparency. Of the 80 selected investments at the eight agencies we reviewed, 53 of the CIO ratings were consistent with the risks portrayed in the supporting investment performance documents, 20 were partially consistent, and 7 VA investments were inconsistent. Those that were partially consistent had few instances of discrepancies during the 12-month period. For example, both of Commerce’s partially consistent investments had a discrepancy during only 1 of the 12 months we reviewed. As previously mentioned, a CIO rating should reflect the level of risk facing an investment, on a scale from 1 (high risk) to 5 (low risk), relative to that investment’s ability to accomplish its goals. However, six of the eight agencies we reviewed had at least one instance wherein the Dashboard’s CIO ratings did not consistently reflect the risks identified in agency documents. Table 4 summarizes our assessment of the 10 investments at each of the selected agencies during the 12-month period from January 2012 through December 2012, and a discussion of the analysis of each agency’s documentation follows the table. Agriculture: Eight of the 10 investments at Agriculture had some inconsistencies with reported risks. In particular, Agriculture experienced technical issues uploading its ratings to the Dashboard in early 2012, which affected the ratings of six investments. Agriculture officials identified the problem with these investments after observing that submitted changes were not reflected on the Dashboard. They addressed it by working with their contractor to determine its cause, modifying their process to prevent it from recurring, and establishing procedures to identify future technical issues. Additionally, the October 2012 ratings for Agriculture’s Financial Management Modernization Initiative—one of the six investments that experienced the technical issue described above—and for its Human Resources Line of Business investment should have been lower (indicating higher risk). Officials stated that these CIO ratings were updated at the beginning of November 2012 and attributed the delays in the rating submissions to the annual budget process. Additionally, Agriculture’s CIO rated the Farm Program Modernization investment as moderately low risk until November 2012, despite the investment showing indications of significantly increased risk as early as May 2012. For example, the number of high and very high risks increased every month in a series of briefings from May to July 2012, at which point there was 1 high-impact risk and 2 very high-impact risks, each with an estimated 70 percent probability of occurring.
Further, in September 2012 a senior management oversight committee determined the investment to be off track and unable to resolve the issues. However, the investment remained rated on the Dashboard as moderately low risk until November 2012, when it was updated to medium risk. As a result, the CIO rating did not reflect the results of the oversight committee’s review for 3 months. The investment was ultimately given a high-risk rating in December 2012. Commerce: Two of the 10 selected Commerce investments differed from the risk levels identified in underlying documentation for 1 month each. In particular, Commerce’s Census—Economic Census and Surveys investment should have been green rather than yellow in March 2012, and its National Oceanic and Atmospheric Administration IT Infrastructure investment should have been yellow rather than green in June 2012. In both cases Commerce officials recognized that there were problems and took steps to correct them the following month. Energy: Five of the 10 selected Energy investments remained listed on the Dashboard after the department had recategorized them as investment types that would no longer be displayed. Specifically, in October 2012, as part of its annual budget process, Energy downgraded its SR Mission Support Systems from major to non-major and, as discussed previously, changed four of the selected supercomputer investments from IT to non-IT. An SR Mission Support Systems official explained that Energy downgraded the investment after realizing that significant portions of it could be interpreted as non-IT under the Federal Acquisition Regulation, and that the remainder did not meet the department’s threshold for being a major IT investment. Further, although Energy made these decisions in October 2012, OMB officials explained that they do not publish budgetary data such as these until the release of the President’s budget, which did not happen until April 2013, at which time the Dashboard reflected Energy’s changes. Justice: One of the 10 selected Justice investments remained on the Dashboard after the department recategorized it as an investment type that would no longer be displayed, similar to Energy’s situation. Specifically, Justice downgraded its Law Enforcement Wireless Communications investment from major to non-major as part of its annual budget process in July 2012, but the Dashboard did not reflect the change until April 2013. As noted above, OMB officials explained that they do not publish budgetary data until the release of the President’s budget, which occurred in April 2013. Transportation: There were no inconsistencies between CIO ratings and reported investments’ risks. Continuing to report consistent and timely data should help ensure that Transportation’s CIO ratings on the Dashboard are accurate. Treasury: We did not identify inconsistencies between CIO ratings and reported investments’ risks. Such attention to reporting consistent and timely data should help ensure the accuracy of the Dashboard’s CIO ratings. VA: Seven of the 10 selected investments were never updated during our evaluation period, and 3 were updated once. VA did not update its ratings because it did not have the ability to automatically submit ratings from September 2011 to March 2013, a period of 19 months. Instead, VA elected to build the capability to submit ratings to the Dashboard, rather than purchase it.
VA completed development of this tool in March 2013, at which point it resumed making the required updates to CIO ratings on the Dashboard. For the 3 investments it did update during this period, VA used a manual budget process to update the ratings in September 2012. SSA: SSA annually rebaselines its investments as part of its budget process (see SSA, Information Resources Management Strategic Plan: FY 2012-2016 (May 2012)). These annual rebaselines can mask cost and schedule variances, prevent the agency from monitoring year-to-year investment performance, and make it difficult to estimate and understand life-cycle costs. While this only affected one investment we reviewed, it has the potential to impact SSA’s entire IT investment portfolio. As described in our Cost Estimating and Assessment Guide, the purpose of such baseline changes should be to restore management’s control of the remaining effort by providing a meaningful basis for performance management. Further, these changes should be infrequent, and recurrent changes may indicate that the scope is not well understood or simply that program management is unable to develop realistic estimates. For SSA, frequent rebaselining increases the risk that its investments will have undetected cost or schedule variances that will impact the success of the associated investments. While agencies experienced several issues with reporting the risk of their investments, such as technical problems and delayed updates to the Dashboard, the CIO ratings were mostly or completely consistent with investment risk at seven of the eight selected agencies. Additionally, the agencies had already addressed several of the discrepancies that we identified. The final agency, VA, did not update 7 of its 10 selected investments because it elected to build, rather than buy, the ability to automatically update the Dashboard, and it has now resumed updating all investments. To their credit, agencies’ continued attention to reporting the risk of their major investments supports the Dashboard’s goal of providing transparency and oversight of federal IT investments. Nevertheless, the rating issues that we identified with performance reporting and annual baselining, some of which are now corrected, serve to highlight the need for agencies’ continued attention to the timeliness and accuracy of submitted information, in order to allow the Dashboard to continue to fulfill its stated purpose. We have previously concluded that, consistent with government and industry best practices, including our own guidance on IT investment management, agencies’ highest-risk investments should receive additional management attention. In particular, agency leaders should use data-driven reviews as a leadership strategy to drive performance improvement. Correspondingly, OMB requires reviews, known as TechStat sessions, to enable the federal government to intervene to turn around, halt, or terminate IT projects that are failing or are not producing results. Of the 80 investments we reviewed, 8 investments at four of the eight selected agencies received a red (high or moderately high risk) rating in 2012. See figure 6 for a list of those at-risk investments and the associated CIO ratings. Of the eight investments receiving a red rating in 2012, the agencies reviewed seven using tools such as TechStat sessions, investment review boards, and other high-level reviews. Each of these reviews resulted in action items intended to improve performance.
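Rebaselining figures in both the SSA discussion above and the Agriculture investment discussed next, so a worked example may help show why frequent baseline changes can mask variances. The numbers below are illustrative only.

    def variance_pct(baseline, actual):
        # Percent variance against the baseline a Dashboard-style cost or
        # schedule rating is computed from.
        return 100.0 * (actual - baseline) / baseline

    # Against its original baseline, this hypothetical investment shows an
    # overrun large enough to warrant a red-level rating:
    print(variance_pct(100.0, 135.0))   # 35.0 percent over plan
    # After a rebaseline resets the plan near current actuals, the same
    # investment appears to be performing to plan:
    print(variance_pct(134.0, 135.0))   # roughly 0.7 percent over plan

The overrun has not gone away; it has simply been folded into a new baseline, which is why recurrent rebaselining can leave cost and schedule variances undetected.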
The final investment, Agriculture’s Human Resources Line of Business: Department Service Center, was scheduled to be reviewed using a TechStat session, but in August 2013 officials from Agriculture’s Office of the CIO stated that this investment was in the process of going through a rebaseline. As noted earlier, such changes should be infrequent, and this one reinforces the need for agencies to review their highest-risk investments to ensure that the root causes of baseline changes are effectively addressed. Each of the agencies we reviewed had similar approaches to identifying and conducting these reviews. The agencies with red-rated investments in 2012—Agriculture, Commerce, Justice, and Transportation—monitored investment performance, identified troubled investments using a variety of criteria, reviewed investments in need of attention, and assigned and tracked action items. For example, both Agriculture and Commerce identified investments for review using the Dashboard’s CIO ratings. All four agencies further ensured that investments implemented corrective actions resulting from management reviews. For example, as a result of its November 2012 TechStat, Agriculture leaders assigned 17 action items to the Farm Program Modernization investment, which were to be completed by January 2013. Additionally, the four remaining agencies without red-rated investments—Energy, Treasury, VA, and SSA—have documented processes which provide comparable monitoring and oversight capabilities. For example, VA’s process relies on missed deliverable dates as the key trigger for holding a review, after which senior management makes a determination as to the future of the investment. Alternatively, Energy reviews all major IT investments on a quarterly basis and requires those which fall outside expected thresholds to complete corrective actions. As such, all of the agencies have focused management attention on troubled investments and, once they are identified, have established and followed through on clear action items to turn around or terminate them. Since its inception in 2009, the IT Dashboard has increased the transparency of the performance of major federal IT investments. Its CIO ratings, in particular, have improved visibility into changes in the risk level of investments over time, as well as into agencies’ ability to accomplish their goals. To that end, over the past 4 years, the agencies we reviewed have reported lower risk for more than one-quarter of all their investments. However, the effectiveness of the Dashboard depends on the quality of the information that agencies submit. We previously recommended that OMB clarify guidance on whether certain types of systems should be included in IT reporting, but OMB did not agree. Agencies’ subsequent decisions to remove investments from the Dashboard by changing investments’ categorizations represent a troubling trend toward decreased transparency and accountability. Thus, we continue to believe that our prior recommendation remains valid and should be implemented. Additionally, OMB’s annual freeze of the Dashboard for as long as 8 months negates one of the primary purposes of this valuable tool—to facilitate transparency and oversight of the government’s billions of dollars in IT investments. Beyond the transparency they promote, CIO ratings present an opportunity to improve the data and processes agencies use to assess investment risk.
Each of the agencies we examined had established such processes, and most of the resulting Dashboard ratings were consistent with underlying documentation. While two agencies, Transportation and Treasury, had ratings that accurately reflected their investments’ supporting documentation, other agencies’ Dashboard ratings were at times inconsistent with the actual risk of the associated investment. These inconsistencies—such as inaccurate reflection of negative performance, questionable decisions about the recategorization of investments, and too-frequent changes to performance baselines—highlight the continued need for agencies to populate the Dashboard with an accurate representation of the full breadth and health of their investments. In providing this information, agencies will help the Dashboard accomplish its goal of providing transparency and oversight of billions of dollars in federal IT investments. To better ensure that the Dashboard provides meaningful ratings and reliable investment data, we are recommending that the Director of OMB direct the Federal CIO to make regularly updated portions of the public version of the Dashboard (such as CIO ratings) accessible independent of the annual budget process. In addition, to better ensure that the Dashboard provides accurate ratings, we are making three recommendations to the heads of three of the selected agencies. Specifically, we are recommending that: The Secretary of Commerce direct the department CIO to ensure that the department’s investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard. The Secretary of Energy direct the department CIO to ensure that the department’s investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard. The Commissioner of SSA direct the agency CIO to revise the agency’s investment management processes to ensure that they are consistent with the baselining best practices identified in our published guidance on cost estimating and assessment. We received comments on a draft of this report from OMB and all eight departments and agencies in our review. OMB neither agreed nor disagreed with our recommendation; Agriculture agreed with the report’s findings; Commerce disagreed with our recommendation; Energy concurred with our recommendation, with one exception related to supercomputers; Justice, Treasury, and Transportation neither agreed nor disagreed with the report’s findings; VA agreed with the report’s findings; and SSA agreed with our recommendation. Each agency’s comments are discussed in more detail below. In written comments, the Federal CIO neither agreed nor disagreed with our recommendation for OMB to make regularly updated portions of the public version of the Dashboard (such as CIO ratings) accessible independent of the annual budget process. However, OMB also noted that the manner in which agencies submit Dashboard information to OMB makes it difficult to separate materials it believes are predecisional or deliberative. Despite this, OMB agreed to explore whether it would be practical to make the Dashboard more accessible and to consider how it could separate predecisional or deliberative materials. We support OMB in its efforts to increase public transparency and accountability.
OMB also continued to disagree with our 2011 recommendation related to the clarity of its guidance on whether certain types of systems should be included in IT reporting, a recommendation we believe continues to have merit. OMB believes that the existing guidance is appropriate and does not plan to review it. However, as noted earlier in this report, we believe that the recommendation is appropriate because the existing guidance does not address key categories of IT investments for which we continued to find inconsistencies. OMB also disputed our characterization of two issues. First, OMB noted that up-to-date Dashboard information is available to the agencies and OMB during the budget development period, and that they use it as a tool for investment oversight and decision making. While we acknowledge that OMB and federal agencies continue to have access to the Dashboard during the budget process, the public display of these data is intended to allow other oversight bodies, including Congress and the general public, to hold government agencies accountable for progress and results. Second, OMB stated that it began using the IT Dashboard to help identify at-risk investments starting with its launch in June 2009, rather than in 2010; we modified the report to reflect OMB's statement. OMB also provided technical comments, which we have incorporated in the report as appropriate. OMB's written comments are provided in appendix III. In written comments, Agriculture's CIO stated that the department concurred with the content of the report and had no comments. Agriculture's written comments are provided in appendix IV. In written comments, Commerce agreed with most of the report's findings but disagreed with our recommendation to ensure that the department's investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard. Specifically, Commerce stated that although it is no longer reporting publicly on its satellite ground system investments through the Dashboard, neither the department CIO nor OMB has relinquished its oversight role. Moreover, Commerce stated that it is reviewing its satellites more frequently and in more detail; as an example, the department noted that Commerce's CIO conducts monthly Dashboard-like assessments that cover the status of the satellite investments. However, regardless of these additional efforts, the removal of the satellite investments from the Dashboard prevents the public display of data intended to allow OMB and other oversight bodies, including Congress, to hold government agencies accountable for progress and results. Additionally, as discussed in this report, Commerce's recategorization of its satellite ground system investments is contrary to the definition of IT set forth in the Clinger-Cohen Act. Based on these facts, we continue to believe that our recommendation to Commerce has merit and should be implemented: the department CIO should ensure that the department's investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard. Commerce's written comments are provided in appendix V. The department also provided technical comments related to our characterization of the department's rating process, which we have incorporated in the report as appropriate.
In written comments, Energy concurred with our recommendation to ensure that the department's investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard, but with one caveat. Specifically, while agreeing that Energy should ensure that its IT investments are appropriately categorized, the department stated that it would continue to report on the Dashboard those investments it believes should be categorized and managed as IT, noting that supercomputers would be an exception to this policy. However, this exception runs contrary to the finding on which this recommendation is based, namely that Energy removed $368 million in investments from the Dashboard, including a supercomputer reported in 2012 as the most powerful in the world. While Energy agreed to partner with OMB to develop an alternate mechanism to track and report supercomputer investments to OMB, it is not clear whether this information will be publicly available. Additionally, Energy's recategorization of its supercomputer investments is contrary to the definition of IT set forth in the Clinger-Cohen Act, as discussed in this report. Therefore, we disagree that the department's planned actions adequately address existing requirements for open and transparent reporting of major IT investments, and we stand by our assessment that Energy's CIO should ensure that the department's investments are appropriately categorized in accordance with existing statutes and that major IT investments are included on the Dashboard. Energy's written comments are provided in appendix VI. In comments provided via e-mail on November 6, 2013, a representative of Justice's Justice Management Division stated that the department had no comments. In comments provided via e-mail on November 1, 2013, Transportation's Deputy Director of Audit Relations stated that the department had no comments. The department also provided technical comments, which we have incorporated in the report as appropriate. In written comments, Treasury's CIO stated that the department had no comments. Treasury's written comments are provided in appendix VII. In written comments, VA's Chief of Staff stated that the department generally agreed with the findings of the report and provided general comments confirming that issues identified in the report had been addressed. Specifically, VA confirmed that the department has implemented the capability to automatically submit CIO ratings to the Dashboard and now uses more specific data to generate the CIO rating, such as the number of "red flags" associated with an investment or the number of TechStat reviews held. Finally, while stating that the department will continue its efforts to improve timely and accurate reporting to the Dashboard, VA noted that it is important to recognize that the department manages investment delivery at the project level rather than at the investment level used by the Dashboard. VA's written comments are provided in appendix VIII. In written comments, SSA's Deputy Chief of Staff agreed with our recommendation to revise the agency's investment management processes to ensure that they are consistent with baselining best practices, and discussed planned actions to address it. SSA's written comments are provided in appendix IX. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to interested congressional committees; the Secretaries of Agriculture, Commerce, Energy, Transportation, the Treasury, and Veterans Affairs; the Attorney General of the United States; the Acting Commissioner of Social Security; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Our objectives for this engagement were to (1) characterize the Chief Information Officer (CIO) ratings for selected federal agencies' information technology (IT) investments as reported on the Dashboard over time, (2) determine the extent to which selected agencies' CIO ratings are consistent with reported investment risk, and (3) determine the extent to which selected agencies are addressing at-risk investments. To address our first objective, we selected the eight agencies with the most reported major IT spending in fiscal year 2012, after excluding agencies reviewed in our most recent Dashboard report. The eight agencies are the Departments of Agriculture (Agriculture), Commerce (Commerce), Energy (Energy), Justice (Justice), Transportation (Transportation), the Treasury (Treasury), and Veterans Affairs (VA), and the Social Security Administration (SSA). The results in this report represent only these agencies. We downloaded and examined the Dashboard's CIO ratings for all major investments at the eight agencies (a total of 383 investments reported by these agencies). To characterize the numbers and percentages of major IT investments at each risk level at each of our subject agencies, we analyzed, summarized, and—where appropriate—graphically depicted average CIO ratings for investments by agency over time during the period from June 2009 to August 2013. Specifically, we compared each investment's first and last CIO ratings (including those investments that were not on the Dashboard at its inception, and those that were downgraded or eliminated) and summarized the data by agency. To describe whether CIO ratings indicated higher or lower investment risk over time, we calculated the numbers and percentages of investments (by agency and collectively for all the agencies) that maintained a constant rating over the entire performance period, and those that experienced a change to their CIO rating in at least one rating period. Then we analyzed the subset of investments that experienced at least one changed rating and compared the first CIO rating with the latest CIO rating (no later than August 2013) to determine the numbers and percentages of investments (by agency and collectively for all the agencies) that experienced a net rating increase, a net rating decrease, or no net change (this comparison is illustrated in the sketch below). We also reviewed investments that had been removed from the Dashboard due to recategorization and compared their definitions to the Clinger-Cohen Act's definition of IT. We presented our results to each agency and the Office of Management and Budget (OMB) and solicited their input, explanations for the results, and additional corroborating documentation, where appropriate.
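To make the first-versus-last rating comparison concrete, the following is a minimal sketch of how such an analysis could be performed. It is illustrative only, not GAO's actual analysis code: the file name, the column names (agency, investment_id, rating_date, cio_rating), and the numeric rating scale (1 = high risk through 5 = low risk) are all assumptions about the data layout.

```python
import pandas as pd

# Hypothetical export of Dashboard CIO ratings: one row per rating event,
# with columns agency, investment_id, rating_date, cio_rating (1-5).
ratings = pd.read_csv("cio_ratings.csv")
ratings["rating_date"] = pd.to_datetime(ratings["rating_date"])
ratings = ratings.sort_values("rating_date")

def net_change(group: pd.DataFrame) -> str:
    """Classify an investment by comparing its first and last CIO ratings."""
    first = group["cio_rating"].iloc[0]
    last = group["cio_rating"].iloc[-1]
    if last > first:
        return "net increase (lower risk)"   # assumes higher number = lower risk
    if last < first:
        return "net decrease (higher risk)"
    return "no net change"

summary = (
    ratings.groupby(["agency", "investment_id"])
           .apply(net_change)
           .rename("net_rating_change")
           .reset_index()
)

# Counts by agency and outcome, analogous to the report's tabulations.
print(summary.groupby(["agency", "net_rating_change"]).size())
```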
To accomplish the second objective, we selected the 10 major investments with the highest reported IT spending in fiscal year 2012 at each of the eight selected agencies, for a total of 80 investments. We reviewed investment documentation (such as executive-level briefings, operational analyses, and TechStat results) and interviewed officials from each of the agencies to understand how they rate and monitor investments. Within these artifacts, we identified representations of investment risk and performance (such as cost and schedule variances) that could impact the success of the associated investment. We then organized the results by month and compared them to the relevant investment Dashboard CIO ratings for calendar year 2012. We then evaluated whether each investment's monthly CIO rating was consistent with reported investment risks and categorized each investment by the number of months in which the two were inconsistent. Investments whose ratings were consistent with the underlying documentation for every month in 2012 were categorized as "consistent"; those with inconsistencies in 1 or more months, but not every month, were categorized as "partially consistent"; and those with inconsistencies in every month were categorized as "inconsistent" (a sketch of this categorization appears at the end of this appendix). Because we were evaluating only the consistency of Dashboard ratings with reported risk, we did not evaluate the accuracy of the investment documentation. To address our third objective, we selected the eight investments that received a red (high or moderately high) rating on the Dashboard during 2012. We reviewed the processes used to address these investments and interviewed relevant officials. We also examined the results of performance reviews and interviewed officials to determine whether the agencies implemented the appropriate risk management processes to oversee and review the selected investments. We conducted this performance audit from February to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Below is the list of the investments included in this review, as well as their reported fiscal year 2012 IT spending. In addition to the contact named above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Rebecca Eyler, Mike Mithani, Kevin Walsh, and Shawn Ward.
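The monthly consistency categorization described in the appendix above reduces to a simple counting rule. The sketch below is an illustrative rendering, not GAO's actual analysis code; it assumes each investment's 2012 record can be summarized as twelve boolean flags, one per month, marking whether the CIO rating matched the risk reported in the underlying documentation.

```python
def categorize(monthly_consistent):
    """Bucket an investment by how many months its CIO rating matched the
    risk reported in its underlying documentation (one flag per month)."""
    misses = monthly_consistent.count(False)
    if misses == 0:
        return "consistent"
    if misses == len(monthly_consistent):
        return "inconsistent"
    return "partially consistent"

# Hypothetical investment whose rating disagreed with its documentation
# in 3 of the 12 months of calendar year 2012.
months = [True] * 9 + [False] * 3
print(categorize(months))  # -> partially consistent
```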
OMB launched the Dashboard in June 2009 as a public Web site that reports performance for major IT investments--on which the federal government plans to invest over $38 billion in fiscal year 2014. The Dashboard is to provide transparency for these investments and to facilitate public monitoring of them. After its launch, OMB began using it to identify at-risk investments. This report (1) characterizes the CIO ratings for selected federal agencies' IT investments as reported on the Dashboard over time, (2) determines the extent to which selected agencies' CIO ratings are consistent with investment risk, and (3) determines the extent to which selected agencies are addressing at-risk investments. GAO selected the eight agencies with the most reported major IT spending in fiscal year 2012 (excluding those GAO recently reviewed) and selected 10 investments at each agency. GAO reviewed the investments' documentation, compared it to the CIO ratings, and reviewed processes used for the highest-risk investments. GAO also interviewed appropriate officials. As of August 2013, the Chief Information Officers (CIO) at the eight selected agencies rated 198 of their 244 major information technology (IT) investments listed on the Federal IT Dashboard (Dashboard) as low risk or moderately low risk, 41 as medium risk, and 5 as high risk or moderately high risk. However, the total number of investments reported by these agencies has varied over time, which impacts the number of investments receiving CIO ratings. For example, Energy reclassified several of its supercomputer investments from IT to facilities and Commerce decided to reclassify its satellite ground system investments. Both decisions resulted in the removal of the investments from the Dashboard, even though the investments were clearly IT. In addition, the Office of Management and Budget (OMB) does not update the public version of the Dashboard as the President's budget request is being created. As a result, the public version of the Dashboard was not updated for 15 of the past 24 months, and so was not available as a tool for investment oversight and decision making. Of the 80 investments reviewed, 53 of the CIO ratings were consistent with the investment risk, 20 were partially consistent, and 7 were inconsistent. While two agencies' CIO ratings were entirely consistent, other agencies' ratings were inconsistent for a variety of reasons, including delays in updating the Dashboard and how investment performance was tracked. For example, the Department of Justice downgraded an investment in July 2012, but the Dashboard was not updated to reflect this until April 2013. Further, the Social Security Administration resets investment cost and schedule performance baselines annually, an approach that increases the risk of undetected cost or schedule variances that will impact investment success. Of the eight investments that were at highest risk in 2012, seven were reviewed by their agencies using tools such as TechStat sessions--evidence-based reviews intended to improve investment performance and other high-level reviews. Each of these resulted in action items intended to improve performance. The final investment was scheduled to have a TechStat, but instead, according to department officials, a decision was made to modify its program cost and schedule commitments to better reflect the investment's actual performance. 
GAO recommends that OMB make Dashboard information available independent of the budget process, and that agencies appropriately categorize IT investments and address identified weaknesses. OMB neither agreed nor disagreed. Six agencies generally agreed with the report or had no comments and two others did not agree, believing their categorizations were appropriate. GAO continues to believe its recommendations remain valid, as discussed.
The Trade Adjustment Assistance (TAA) program is the federal government’s primary program specifically designed to provide assistance to workers who lose their jobs as a result of international trade. In addition, to assist U.S. domestic industries injured by unfair trading practices or increases in certain fairly traded imports, U.S. law permits the use of trade remedies, such as duties on imported products. Currently, Labor certifies workers for TAA on a layoff-by-layoff basis. The process for enrolling trade-affected workers in the TAA program begins when a petition for TAA assistance is filed with Labor on behalf of a group of workers. Petitions may be filed by the employer experiencing the layoff, a group of at least three affected workers, a union, or the state or local workforce agency. Labor investigates whether a petition meets the requirements for TAA certification and is required to either certify or deny the petition within 40 days of receiving it. The TAA statute lays out certain basic requirements for petitions to be certified, including that a significant proportion of workers employed by a company be laid off or threatened with layoff and that affected workers must have been employed by a company that produces articles. In addition to meeting these basic requirements, a petition must demonstrate that the layoff is related to international trade in one of several ways: Increased imports—imports of articles that are similar to or directly compete with articles produced by the firm have increased, the sales or production of the firm has decreased, and the increase in imports has contributed importantly to the decline in sales or production and the layoff or threatened layoff of workers. Shift of production—the firm has shifted production of an article to another country, and either: the country is party to a free trade agreement with the United States; or the country is a beneficiary under the Andean Trade Preference Act, the African Growth and Opportunity Act, or the Caribbean Basin Economic Recovery Act; or there has been or is likely to be an increase in imports of articles that are similar to or directly compete with articles produced by the firm. Affected secondarily by trade—workers must meet one of two criteria: Upstream secondary workers—affected firm produces and supplies component parts to another firm that has experienced TAA-certified layoffs; parts supplied to the certified firm constituted at least 20 percent of the affected firm’s production, or a loss of business with the certified firm contributed importantly to the layoffs at the affected firm. Downstream secondary workers—affected firm performs final assembly or finishing work for another firm that has experienced TAA-certified layoffs as a result of an increase in imports from or a shift in production to Canada or Mexico, and a loss of business with the certified firm contributed importantly to the layoffs at the affected firm. Labor investigates whether each petition meets the requirements for TAA certification by taking steps such as surveying officials at the petitioning firm, surveying its customers, and examining aggregate industry data. In the surveys, Labor obtains information on whether the firm is now importing products that it had once produced or whether its customers are now importing products that the firm produced. 
Labor also obtains information on whether the firm has moved or is planning to move work overseas and, to identify a secondary impact, whether the layoff occurred due to loss of business with a firm that was certified for TAA. When Labor has certified a petition, it notifies the relevant state, which has responsibility for contacting the workers covered by the petition, informing them of the benefits available to them, and telling them when and where to apply for benefits. If Labor denies a petition for TAA assistance, the workers who would have been certified under the petition have two options for challenging the denial. They may request an administrative reconsideration of the decision by Labor. To take this step, workers must provide reasons why the denial is erroneous, based on either a mistake or misinterpretation of the facts or the law itself, and must mail their request to Labor within 30 days of the announcement of the denial. Workers may also appeal to the U.S. Court of International Trade for judicial review of Labor's denial. Workers must appeal a denial to the Court within 60 days of either the initial denial or a denial following administrative reconsideration by Labor. Under TAA, workers certified as eligible for the program may have access to a variety of benefits: Training for up to 130 weeks, including 104 weeks of vocational training and 26 weeks of remedial training, such as English as a second language or adult basic education. Extended income support for up to 104 weeks beyond the 26 weeks of unemployment insurance (UI) benefits available in most states. Job search and relocation benefits fund participants' job searches in a different geographical area and relocation to a different area to take a job. A wage insurance benefit, known as the Alternative Trade Adjustment Assistance (ATAA) program, pays older workers who find a new job at a lower wage 50 percent of the difference between their new and old wages, up to a maximum of $10,000 over 2 years (the benefit arithmetic is illustrated in the sketch following this section). A health coverage benefit, known as the Health Coverage Tax Credit (HCTC), helps workers pay for health care insurance through a tax credit that covers 65 percent of their health insurance premiums. In addition, case managers provide vocational assessments and counseling to help workers enroll in the program and decide which services or benefits are most appropriate. Local case managers also refer workers to other programs, such as the Workforce Investment Act, for additional services. The United States and many of its trading partners have used laws known as "trade remedies" to mitigate the adverse impact of certain trade practices on domestic industries and workers, notably dumping—when a foreign firm sells a product in the United States at a price below fair market value—and foreign government subsidies that lower producers' costs or increase their revenues. In both situations, U.S. law provides that if dumped or subsidized imports injure a domestic industry, a duty intended to counter these advantages be imposed on the imports. Such duties are known as antidumping and countervailing duties. In addition, in the event of an increase in imports of a certain product, safeguards, such as quotas or tariffs, may be applied to these products to provide an opportunity for domestic industries to adjust to increasing imports. As of March 30, 2007, there were 280 antidumping and countervailing duty orders and, according to officials at the U.S. International Trade Commission (ITC), no safeguard measures in place.
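Before turning to the trade remedy process, the ATAA wage insurance arithmetic noted in the benefits list above can be made concrete. This is a minimal illustrative sketch of the 50-percent-of-the-gap rule and the $10,000 cap only; it assumes the worker stays employed at the new wage for the full 2-year benefit period and omits ATAA's other eligibility conditions.

```python
def ataa_benefit(old_annual_wage, new_annual_wage):
    """Total ATAA payment over the 2-year benefit period: 50 percent of the
    old-versus-new wage gap, capped at $10,000 (illustrative sketch)."""
    if new_annual_wage >= old_annual_wage:
        return 0.0  # no benefit unless the new job pays less than the old one
    two_year_gap = 2 * (old_annual_wage - new_annual_wage)
    return min(0.5 * two_year_gap, 10_000.0)

# A worker moving from $40,000 to $32,000 has a $16,000 two-year gap,
# half of which is $8,000; a larger gap hits the $10,000 cap.
print(ataa_benefit(40_000, 32_000))  # 8000.0
print(ataa_benefit(40_000, 25_000))  # 10000.0 (capped)
```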
The process for imposing a trade remedy begins when a domestic producer files for relief or when the Department of Commerce (Commerce) initiates the process, followed by two separate investigations: one by Commerce to determine if dumping or subsidies are occurring, and the other by the ITC to determine whether a domestic U.S. industry is materially injured by such unfairly traded imports or, in the case of safeguards, experiences serious injury from a rise in imports. As a result of an affirmative determination by both Commerce and the ITC in an antidumping or countervailing duty investigation, a duty is imposed on the imported good that can reflect the difference between the price in the foreign market and the price in the U.S. market, known as the "dumping margin," or the amount of the foreign subsidy. In the case of an affirmative determination in a safeguard investigation, the ITC provides the President with one or more recommendations for remedying the situation, such as a tariff or quota on an imported product. The President may implement or modify the recommendations, or take no action due to U.S. economic or national security interests. Labor certified two-thirds of the petitions it investigated over the past 3 fiscal years: nearly 4,700 petitions, covering an estimated 400,000 workers (see table 1). Over the past 3 fiscal years, the number of petitions certified has declined 17 percent, from nearly 1,700 in fiscal year 2004 to 1,400 in fiscal year 2006. This decline parallels a decline in the number of petitions filed. Labor has generally processed petitions in a timely manner over the past 3 fiscal years. Labor's average processing time has remained relatively steady, averaging 32 days to conduct an investigation and determine whether to certify or deny a petition. Labor met the requirement to process petitions within 40 days for 77 percent of the petitions it investigated during fiscal years 2004 to 2006 (see fig. 1). Labor most often took only an additional day to process the remaining petitions, and 95 percent were completed within 60 days. Labor officials said that they are not always able to meet the 40-day time frame because they sometimes do not receive necessary information in a timely manner from company officials. In fiscal year 2006, the most common reason petitions were denied was that workers were not involved in producing an article, a basic requirement of the TAA program. Of the more than 800 petitions filed in fiscal year 2006 that were denied, 359 (44 percent) were denied for this reason (see fig. 2). Of those petitions denied because workers did not produce articles, most came from two industries: business services, such as computer programming, and airport-related services, such as aircraft maintenance (see app. II for the complete list of industries that had petitions denied, by reason for the denial). During the past 3 fiscal years, workers appealed decisions in 16 percent of the approximately 2,600 petitions that Labor initially denied, with the vast majority appealed to Labor. Labor's decisions were reversed in one-third of the appeals (see fig. 3). Labor officials told us that appeals are often reversed because Labor receives new information from petitioners or company officials, as part of the appeals process, that justifies certifying the petition. Although few denied petitions are appealed to the U.S.
Court of International Trade—42 in the last 3 fiscal years—many of the recent appeals concern the issue of whether workers were involved in the production of articles. In fiscal years 2005 and 2006, Labor's original denial was reversed in 13 cases appealed to the Court, and most of these cases addressed the issue of whether workers produced articles. Some of these cases concerned workers who produced software, which Labor had regarded as a service when the software was not contained in a physical medium, such as a CD-ROM. In 2006, Labor revised its policy, stating that software could be considered an intangible article because it would have been considered an article if it had been produced in a form such as a CD-ROM. Following this decision, a Labor official reported that Labor had certified 12 of 21 petitions investigated in the software and computer-related services industries. An industry certification approach based on three petitions certified within 180 days would likely increase the number of workers eligible for TAA, but it presents some design and implementation challenges. For example, among the industries for which we could obtain complete data, we found that the number of additional workers eligible for TAA in those industries could more than double if no additional criteria were used, or expand by less than 10 percent with relatively restrictive criteria. Designing the specific criteria an industry must meet to be certified could be challenging due to the possibility of making workers who lose their jobs for reasons other than trade eligible for TAA. In addition, it may be challenging to ensure that all workers in certified industries are notified of their potential eligibility for TAA, to verify workers' eligibility, and to initiate the delivery of services to workers. From 2003 to 2005, 222 industries had three petitions certified within 180 days and therefore would have triggered an investigation to determine whether an entire industry should be certified, if such an approach had been in place at that time. These industries represented over 40 percent of the 515 industries with at least one TAA certification in those 3 years and included 71 percent of the workers estimated to be certified for TAA from 2003 to 2005. The 222 are a diverse set of industries, including textiles, apparel, wooden household furniture, motor vehicle parts and accessories, certain plastic products, and printed circuit boards (see app. III for a list of the 222 industries). The proposals for this approach include a requirement that an investigation be initiated after an industry meets the three-certifications-in-180-days criterion. This investigation would use additional criteria to determine whether these certifications represent a broad industrywide phenomenon or just a collection of firms experiencing similar pressures from foreign trade. As a result, not all 222 industries would likely be certified industrywide. The additional criteria that an industry would have to meet to be certified have not yet been specified, but they could include factors such as the extent to which an industry has been impacted by imports, changes in production levels in the industry, or changes in employment levels. The number of workers that would become eligible for TAA through an industry certification approach depends on what additional criteria are established.
For example, we analyzed 69 industries in the manufacturing sector for which we had comprehensive data on petitions, unemployment, trade, and production. These industries represent about one-third of the 222 industries that would have been eligible for consideration for industrywide certification (and 13 percent of the 515 industries with certified petitions). If there were no additional criteria beyond the three-petitions criterion and all of the 69 industries had been certified, the number of workers eligible for TAA in these industries would have more than doubled over the number actually certified under the current layoff-by-layoff process. However, if certification were limited to those industries that also had a 10 percent increase in the import share of the domestic market over a 1-year period, we estimated that the increase in eligible workers in these industries would have been more modest, at roughly 70 percent. Under a slightly more restrictive criterion—a 15 percent increase in the import share of the domestic market—the increase in the number of eligible workers would be smaller, an estimated 39 percent in those 69 industries (see fig. 4). If we were able to analyze the program as a whole, the magnitude of the increases would likely be different. This would occur, in part, because the number of workers would increase only in those industries that met the three-petitions criterion and would not increase in those that did not meet it. Thus, the multiplier we developed for the 69 industries could not be applied broadly to all 515 industries with certified petitions. More stringent criteria would result in a smaller increase in the number of workers eligible for TAA. For example, if over a 3-year period an industry were required to have a 15 percent increase in the import share of the domestic market in 1 year, as well as increases in the import share during the 2 other years, we estimated that there would have been a 9 percent increase in the number of workers eligible for TAA in the 69 industries we analyzed. (For further analysis of the 69 industries, see app. IV.) Although industry certification based on three petitions certified in 180 days is likely to increase the number of workers eligible for TAA, it also presents several potential design and implementation challenges. Designing additional criteria for certification. Any industrywide approach raises the possibility of certifying workers who were not adversely affected by trade. Even in industries that are heavily impacted by trade, workers could lose their jobs for other reasons, such as the work being relocated domestically. For example, Labor officials told us that they have denied petitions in the apparel industry, which has been heavily impacted by trade, because the layoff was not related to trade but occurred as a result of work being moved to another domestic location. The risk of certifying non-trade-affected workers increases with more lenient criteria for industrywide certification. On the other hand, narrow criteria may limit the potential benefits of industry certification because few industries would be certified. Furthermore, using the same thresholds for all industries would not take into account industry-specific patterns in trade and other economic factors. For example, the import share of the domestic market may be volatile and change significantly from year to year in some industries, while other industries may experience smaller year-to-year growth in imports that could represent a significant impact over time.
Determining appropriate duration of certification. Determining the length of time that an industry would be certified may also present challenges. If the length of time is too short, Labor may bear the administrative burden of frequently re-investigating industries that continue to experience trade-related layoffs after the initial certification expires. In addition, a shorter duration may make it difficult for workers to know whether their industry is certified at the particular time that they are laid off. As a result, workers may not know whether they need to file a regular TAA petition to become certified. However, if the time period is too long, workers may continue to be eligible for TAA even if conditions change and an industry is no longer adversely affected by trade. Defining the industries. How the industries are defined would significantly affect the number of workers who would become eligible for TAA through an industry certification approach. Our analysis defined industries according to industry classification systems used by government statistical agencies. However, some of these industry categories are broad and may encompass products that are not adversely affected by trade. On the other hand, certain products within an industry that, as a whole, does not show evidence of a trade impact may have been adversely affected by trade. For example, the men's footwear industry might not be classified as adversely trade-impacted because it did not have a 15 percent increase in the import share of the domestic market over a 1-year period, but it is possible that certain types of men's footwear, such as casual shoes or boots, could be adversely impacted by trade. Narrower definitions would reduce the possibility of certifying workers who are not adversely affected by trade, but doing so would cover fewer workers and could increase the administrative burden for Labor because it might have to investigate more industries. Notifying workers and initiating the delivery of services. Notifying workers of their eligibility for TAA has been a challenge and would continue to be under industry certification. Under the current certification process, workers are linked to services through the petition process. The specific firm is identified on the petition application, and state and local workforce agencies work through the firm to reach workers in layoffs of all sizes. However, getting lists of employees affected by layoffs and contacting them is sometimes a challenge for states and would remain so under industry certification. For industry certification, however, there are no such procedures in place to notify all potentially eligible workers in certified industries. For large layoffs in a certified industry, state and local workforce agencies could potentially use some of the processes they currently have in place to connect with workers, but it is not apparent that there would be a built-in link to workers in small layoffs. In large layoffs, firms with 100 or more employees are generally required to provide 60 days' advance notice to state and local workforce agencies, which then work with the firm to provide rapid response services and inform workers about the various services and benefits available, including TAA. However, in smaller layoffs in certified industries, or when firms do not provide advance notice, workforce agencies would not know that the layoff has occurred and therefore would not be able to notify the workers of their eligibility for TAA. Verifying worker eligibility.
Verifying that a worker was laid off from a job in a certified industry, to ensure that only eligible workers receive TAA benefits, may be more of a challenge under industry certification than under the current system. For example, it may be difficult to identify the specific workers who made a product in the certified industry if their employer also makes products that are not covered under industrywide certification. In order to realize one of the potential benefits of industry certification—reduced processing time—this verification process would need to take less time than it takes workers to become certified through the layoff-by-layoff certification process. As we noted, Labor takes on average 32 days to complete its investigation of a petition, but it generally takes additional time for individuals to be notified of their eligibility. In addition, determining which entity would conduct this verification may also present challenges. A centralized process conducted by Labor would likely be unwieldy, while verification by state or local workforce agencies could take less time, but ensuring consistency across the nation might prove challenging. Using trade remedies to certify industries for TAA could expand eligibility for workers in some industries, but the extent is uncertain and there are challenges associated with using trade remedies to identify trade-related job losses. Such an approach could expand eligibility because some trade remedies may cover areas in which there have been few or no certified TAA petitions. However, some trade remedies are for products that may already be covered by TAA petitions, so the number of workers eligible for TAA may not increase substantially in these areas. It is difficult to estimate the impact of this approach on eligibility because trade remedies are applied to specific products, and data on unemployment by product do not exist. In addition, this approach presents some of the same challenges as an industry certification approach based on three petitions certified in 180 days. For example, workers who did not lose their jobs due to international trade could be made eligible for TAA, in part because trade remedy investigations are focused on injury to an industry as a whole and not principally on employment impacts. Using trade remedies for industrywide certification could result in expanded worker eligibility for TAA in a number of industries. For example, 280 antidumping and countervailing duty orders covering over 100 products were in place as of March 30, 2007. The number of workers eligible for TAA would increase under this approach in areas in which there have been few or no TAA petitions. For example, even though the ITC found that the domestic industry producing certain kinds of orange juice had been materially injured by imports, there do not appear to have been any certified TAA petitions for workers producing orange juice. However, the number of workers eligible for TAA may not increase substantially in certain areas, in part because of overlap between trade remedies and TAA petitions. For example, over half of the outstanding antidumping and countervailing duty orders are for iron and steel products, which have also received hundreds of petitions under TAA. However, even where the products covered by trade remedies and TAA overlap, eligibility could expand to some unemployed workers whose firms did not submit a petition or did not qualify under current TAA certification criteria.
In addition, industries with trade remedies may not necessarily have experienced many trade-related job losses because the ITC investigates whether an industry as a whole has been injured and does not specifically focus on employment, according to an ITC official. Whereas Labor investigates whether increased imports contributed importantly to a layoff or threat of layoff, the ITC looks at a wide range of economic factors beyond employment, such as sales, market share, productivity, and profitability. It is difficult to estimate the extent to which industry certification based on trade remedies would increase the number of workers eligible for TAA because trade remedies are imposed on specific products coming from specific U.S. trade partners, and data are not available on job losses at such a detailed level. The product classifications for a given trade remedy can be very narrow, such as "carbazole violet pigment 23" or "welded ASTM A-312 stainless steel pipe." Estimating the increase in the number of eligible workers would require unemployment information categorized by individual product, and these data do not exist. An approach using trade remedies presents some of the same challenges as an industry certification approach based on three petitions certified in 180 days. Workers who did not lose their jobs due to trade could possibly be made eligible for TAA under a trade remedy approach for several reasons. First, trade remedies are not necessarily an indicator of recent trade-related job losses, in part because the ITC's process is not employment-focused and even recent injury determinations can be based on several prior years of data. For example, officials at Labor told us that trade remedies have not been useful in their investigations of TAA petitions because they are based on several years' worth of information and can be unrelated to current industry and employment conditions. Furthermore, trade remedies are intended to mitigate the trade-related factors that caused the injury to the industry, so employment conditions in an industry could improve after the trade remedy is in place. In addition, as with the other industrywide approach, notifying workers in industries with trade remedies and connecting them with services would be a challenge, as would verifying that they were laid off from a certified industry. The verification process could be particularly challenging with an approach based on trade remedies because of the narrow product classifications of some trade remedy products. In firms that make multiple products (for example, more than one type of stainless steel pipe), it may be difficult to identify which specific workers worked on the products subject to trade remedies. An ITC official also expressed concern that seeking waivers to share with Labor information collected during injury investigations could hamper the ITC's ability to collect confidential business information from firms. By statute, the ITC cannot share with other government agencies business proprietary information submitted to it in a trade remedies investigation without a waiver from the submitter. We provided a draft of this report to the Department of Labor and the International Trade Commission. The Department of Labor did not comment. The International Trade Commission provided technical comments, which we incorporated into the report as appropriate.
As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of the report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Chairman of the International Trade Commission, relevant congressional committees, and other interested parties. Copies will also be made available to others upon request. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Key contributors to this report are listed in appendix V. Our objectives were to determine (1) trends in Labor's certification of Trade Adjustment Assistance (TAA) petitions, (2) the extent to which industry certification based on three petitions certified within 180 days would increase the number of workers eligible for TAA, and (3) the extent to which certification of industries subject to trade remedies would increase the number of eligible workers. We also identified potential challenges with an industry certification approach. To address these questions, we analyzed Labor's data on TAA petitions, the Bureau of Labor Statistics' Mass Layoff Statistics data, the Census Bureau's data on trade and production, and the International Trade Commission's data on trade remedies. In examining recent trends in Labor's certification of TAA petitions, we analyzed Labor's petitions data from fiscal years 2004 to 2006. However, in estimating the extent to which industry certification would increase the number of eligible workers, we analyzed data from calendar years 2003 to 2005 because that was the most recent period for which trade, production, and Mass Layoff Statistics data were all available. In addition, we interviewed officials at Labor and the International Trade Commission. We conducted our work from January to June 2007 in accordance with generally accepted government auditing standards. To determine trends in Labor's certification of TAA petitions, we analyzed Labor's data for petitions filed from fiscal years 2004 to 2006. We assessed the reliability of key data by interviewing Labor officials knowledgeable about the data, observing a demonstration of the database, reviewing our prior assessments of the data, and conducting edit checks. For a small number of petitions, we identified logical inconsistencies or missing values in the data. We brought these issues to the attention of Labor officials and worked with them to correct the issues before conducting our analysis. Complete data on reasons petitions were denied were available only for fiscal year 2006 because Labor began to collect these data in 2005. As a result, we reported information on reasons petitions were denied for fiscal year 2006 only. In analyzing the number of petitions denied for TAA that were appealed to Labor, we did not include in our analysis petitions that were appealed only for a denial of Alternative Trade Adjustment Assistance (ATAA) wage insurance benefits. These petitions had been certified for TAA after Labor's initial investigation but denied for wage insurance benefits. In analyzing data on petitions that were appealed to the U.S. Court of International Trade, we compared Labor's data to Court documents. We determined that Labor's data on appeals to the Court were not complete.
As a result, we supplemented Labor's data with a review of Court documents. At the time of our review, some petitions filed during fiscal years 2004 to 2006 may still have been undergoing an appeals process. Our analysis of petition decisions and appeals reflects the outcomes of petitions at the time of our review. Despite these limitations, we determined that Labor's petitions data were sufficiently reliable for the purpose of this report: to provide information on trends in Labor's certification of petitions. To estimate the extent to which industry certification based on three petitions certified in 180 days would increase the number of workers eligible for TAA, we first analyzed Labor's data on petitions certified from calendar years 2003 to 2005 to identify which industries would have had three petitions certified in 180 days in that time frame (this screening step is illustrated in the sketch below). We then analyzed the Bureau of Labor Statistics' Extended Mass Layoff Statistics survey to determine the number of workers who had been laid off in those industries. We used these data as a proxy for the full number of workers that would have been eligible for TAA had an industrywide certification process been in place at the time. We also collected Census trade data (imports and exports) by industry and Census production data. Of the 222 industries that had three petitions certified in 180 days from 2003 to 2005, we were able to analyze 69 industries for which we had comprehensive data. The available data sources used different industry classification systems, which we matched to each other. Labor's petitions data classified industries according to the Standard Industrial Classification (SIC) system, while the Mass Layoff Statistics and Census trade and production data classified industries based on variations of the North American Industry Classification System (NAICS). In addition, our production data, from Census' Annual Survey of Manufactures, were limited to manufacturing industries. We were able to find complete and well-defined matches (one-to-one NAICS to SIC, or many-to-one NAICS to SIC) between the SIC- and NAICS-defined industries for 69 of the 222 industries. We could not use data on industries where a NAICS code corresponded to multiple SIC codes. Because the 69 industries were not drawn from a random sample, the results of our analysis are not necessarily representative of the entire 222 industries. In analyzing Labor's petitions data to identify the industries that would have met the three-petitions-in-180-days criterion during 2003 to 2005, we added an additional criterion to our analysis: the three petitions had to be from three different companies. When companies lay off workers at multiple divisions or locations, they may file separate petitions for each division or location. We added the criterion to prevent an industry from becoming eligible to be considered for certification when the three petitions were from the same company. In estimating the number of workers certified for TAA under the current program, we used estimates from Labor's petitions data on the number of workers affected by a layoff at the time that TAA petitions are filed with the Department of Labor. At the time petitions are submitted, companies may not know exactly how many workers will be affected. The Department of Labor does not collect information on the number of workers ultimately certified.
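The screening step referenced above, identifying industries with three petitions certified within any 180-day window and requiring that those petitions come from three different companies, can be sketched as follows. The record layout is an illustrative assumption, not Labor's actual data format or GAO's analysis code.

```python
from collections import defaultdict
from datetime import date, timedelta

def triggered_industries(petitions):
    """Return industries with certified petitions from at least three
    different companies within some 180-day window.
    petitions: iterable of (industry, company, certification_date) tuples."""
    by_industry = defaultdict(list)
    for industry, company, certified in petitions:
        by_industry[industry].append((certified, company))
    triggered = set()
    for industry, certs in by_industry.items():
        certs.sort()  # chronological order
        for start, _ in certs:
            end = start + timedelta(days=180)
            companies = {c for d, c in certs if start <= d <= end}
            if len(companies) >= 3:  # three distinct companies, per the added criterion
                triggered.add(industry)
                break
    return triggered

# Hypothetical petition records for illustration only.
sample = [
    ("wooden household furniture", "Firm A", date(2004, 1, 10)),
    ("wooden household furniture", "Firm B", date(2004, 3, 2)),
    ("wooden household furniture", "Firm C", date(2004, 5, 20)),
]
print(triggered_industries(sample))  # {'wooden household furniture'}
```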
We used these worker estimates to gauge the increases in the number of workers who might be eligible for TAA with the addition of an industry certification approach. They should not be relied upon to support precise counts of workers certified for TAA. In estimating the increase in worker eligibility, we used data from the Bureau of Labor Statistics' Mass Layoff Statistics program to estimate the number of unemployed workers in industries that could have been eligible for industrywide certification. The Mass Layoff Statistics program collects reports on mass layoff actions that result in workers being separated from their jobs. Monthly mass layoff numbers are from establishments that have at least 50 initial claims for unemployment insurance (UI) filed against them during a 5-week period. Extended mass layoff numbers (issued quarterly) are from a subset of such establishments—private-sector nonfarm employers who indicate that 50 or more workers were separated from their jobs for at least 31 days. We used extended mass layoff data to reduce the possibility of including workers who were on temporary layoff and subject to recall. Several limitations of the Mass Layoff Statistics data are relevant to this analysis. First, Mass Layoff Statistics only include workers from larger firms (firms with at least 50 employees). The data also only include workers laid off through larger layoffs (layoffs of at least 50 employees). In 2003, there were 1,404,331 initial claims in extended mass layoffs. In contrast, estimated unemployment due to permanent layoffs in 2003 was 2,846,000, and total unemployment in 2003 was 8,774,000. Thus, workers involved in extended mass layoffs represented 49 percent of permanent layoffs and 16 percent of total unemployment in 2003. Second, the Bureau of Labor Statistics suppressed Mass Layoff Statistics data on the number of layoffs and the number of workers in the data they provided to us when the number of layoffs in an industry was less than three. Approximately 40 percent of the data were suppressed. The results reported here reflect the following imputation criterion: suppressed values were imputed with the mean number of laid-off workers per layoff event within the broad industry group (by year), multiplied by 1.5. We also conducted sensitivity analyses using more conservative imputation criteria; the results were not highly sensitive to the criterion we selected. In order to identify changes in trade related to individual industries, we calculated the share of imports in the domestic market for each year between 2002 and 2005. We defined the domestic market as U.S. domestic production (measured by shipments) minus exports plus imports. We then calculated the change in the import share of the domestic market from 2002 to 2003, 2003 to 2004, and 2004 to 2005. We examined the distribution of these changes annually across industries and calculated sample statistics. The mean change for the 69 industries was 6.1 percent in 2003, 13.01 percent in 2004, and 3.5 percent in 2005. The standard deviation for these industries was 11.0 percent in 2003, 30.25 percent in 2004, and 13.74 percent in 2005. We also compared these statistics to those of the entire population of 222 industries and found that they were similar. Based on this analysis, we determined that using criteria of annual changes of 10, 15, and 20 percent in the share of imports in the domestic market was reasonable to illustrate the impact of changing criteria on the number of workers potentially eligible for TAA.
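The import-share calculation and the imputation rule described above can be illustrated with a short sketch. The figures are invented, and the sketch assumes the 10 percent threshold refers to a relative year-over-year change in import share; the text could also be read as a percentage-point change.

```python
def import_share(imports, exports, shipments):
    """Imports' share of the domestic market, where the domestic market is
    defined as shipments (domestic production) minus exports plus imports."""
    return imports / (shipments - exports + imports)

# Hypothetical industry figures, invented for illustration only.
share_2002 = import_share(imports=200, exports=50, shipments=1000)  # ~0.174
share_2003 = import_share(imports=260, exports=50, shipments=950)   # ~0.224

# One reading of the screening criterion: a 10 percent relative
# year-over-year increase in the import share of the domestic market.
relative_change = (share_2003 - share_2002) / share_2002
meets_criterion = relative_change >= 0.10
print(round(relative_change, 2), meets_criterion)  # 0.29 True

# Imputation for suppressed Mass Layoff Statistics cells, as described above:
# the mean number of laid-off workers per layoff event within the broad
# industry group (by year), multiplied by 1.5.
def impute_suppressed(mean_workers_per_layoff_in_group):
    return 1.5 * mean_workers_per_layoff_in_group
```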
We also examined compound annual changes across all 4 years and found similar results. To determine the extent to which eligibility for TAA would expand if trade remedies were used to certify industries for TAA, we reviewed the industries and products covered by antidumping and countervailing duties. To the extent possible, we assessed areas of overlap between TAA petitions and trade remedies. We also compared the eligibility criteria for TAA with the ITC's process for determining whether an industry has been injured. To identify potential challenges with industry certification, we interviewed officials at the Department of Labor and the International Trade Commission. Tables 5 and 6 list the 222 industries that had three petitions certified in 180 days from 2003 to 2005. Table 5 lists the 69 industries we were able to analyze with trade and unemployment data, and table 6 lists the industries we were not able to analyze due to data limitations. To assess the sensitivity of the criteria to changes in the trade threshold, we analyzed the 69 industries for which we had complete data using a range of thresholds. We found that more stringent criteria sometimes resulted in appreciable differences in the number of workers eligible for TAA in those industries. For example, if, to be certified, an industry not only had to have a 10 percent increase in the import share of the domestic market in 1 year but also had to have an increase in the import share during the 2 other years between 2003 and 2005, we estimated that there would have been a 24 percent increase in the number of workers eligible for TAA in the 69 industries we analyzed (see table 7). Because the 69 industries for which we had comprehensive data were not selected randomly, the results cannot be generalized to the entire group of 222 industries that met the three-petitions-in-180-days criterion, nor are they predictive of future levels of eligible workers. Dianne Blank, Assistant Director, and Yunsian Tai, Analyst-in-Charge, led this work; Michael Hoffman, Rhiannon Patterson, and Timothy Wedding made significant contributions to all aspects of this report. In addition, Kim Frankena assisted with research on trade remedies, and Rachel Valliere provided writing assistance. Christopher Morehouse, Theresa Lo, Mark Glickman, Jean McSween, and Seyda Wentworth verified our findings. Trade Adjustment Assistance: Program Provides an Array of Benefits and Services to Trade-Affected Workers. GAO-07-994T. Washington, D.C.: June 14, 2007. Trade Adjustment Assistance: Changes Needed to Improve States' Ability to Provide Benefits and Services to Trade-Affected Workers. GAO-07-995T. Washington, D.C.: June 14, 2007. Trade Adjustment Assistance: Changes to Funding Allocation and Eligibility Requirements Could Enhance States' Ability to Provide Benefits and Services. GAO-07-701 and GAO-07-702. Washington, D.C.: May 31, 2007. Trade Adjustment Assistance: New Program for Farmers Provides Some Assistance, but Has Had Limited Participation and Low Program Expenditures. GAO-07-201. Washington, D.C.: December 18, 2006. Trade Adjustment Assistance: Labor Should Take Action to Ensure Performance Data Are Complete, Accurate, and Accessible. GAO-06-496. Washington, D.C.: April 25, 2006. Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006. Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012.
Washington, D.C.: September 22, 2004.
Trade Adjustment Assistance (TAA) is the nation's primary program providing job training and other assistance to manufacturing workers who lose their jobs due to international trade. For workers to receive TAA benefits, the Department of Labor (Labor) must certify that workers in a particular layoff have lost their jobs due to trade. Congress is considering allowing entire industries to be certified to facilitate access to assistance. GAO was asked to examine (1) trends in the current certification process, (2) the extent to which the proposed industry certification approach based on three petitions certified in 180 days would increase eligibility, along with the potential challenges of this approach, and (3) the extent to which an approach based on trade remedies would increase eligibility, along with its potential challenges. To address these questions, GAO analyzed data on TAA petitions, mass layoffs, trade, production, and trade remedies. GAO also interviewed Labor and ITC officials. GAO is not making recommendations at this time. Labor reviewed the report and did not provide comments. The ITC provided technical comments that have been incorporated as appropriate.

During the past 3 fiscal years, Labor certified about two-thirds of the TAA petitions it investigated and generally processed petitions in a timely manner. Labor certified 4,700, or 66 percent, of the 7,100 petitions it investigated from fiscal years 2004 to 2006. Labor took on average 32 days to make a certification decision and processed 77 percent of petitions within the required 40-day time frame. According to Labor officials, they were not always able to meet the 40-day time frame because they sometimes did not receive information from company officials in a timely manner. In fiscal year 2006, 44 percent of the petitions that Labor denied were denied because the workers were not involved in the production of an article.

An industry certification approach based on three petitions certified in 180 days would likely increase the number of workers eligible for TAA but presents some design and implementation challenges. The extent of the increase in eligible workers depends on the additional criteria, if any, industries would have to meet to be certified. From 2003 to 2005, 222 industries had three petitions certified within 180 days. Based on GAO's analysis of 69 of these industries for which complete data could be obtained, the number of eligible workers in these industries could more than double if no additional criteria were used, but would expand by less than 10 percent if industries had to meet more restrictive criteria, such as demonstrated increases in the import share of the domestic market over a 3-year period. Designing the criteria presents challenges because of the possibility of making workers who lose their jobs for reasons other than trade eligible for TAA. Implementation challenges include notifying all workers of their potential eligibility, verifying their eligibility, and linking them with services.

Using trade remedies to certify industries could also expand eligibility for workers in some industries, but challenges exist. While basing industry certification on trade remedies could expand eligibility in areas where there have been no TAA petitions, some trade remedies are for products already covered by TAA petitions, such as iron and steel products.
It is difficult to estimate the extent of the impact on worker eligibility because trade remedies are applied to specific products, and data on unemployment by product do not exist. This approach presents many of the same challenges as industry certification based on three petitions certified in 180 days. For example, workers who did not lose their jobs due to international trade could be made eligible for TAA because trade remedy investigations are not focused on employment. In addition, verifying workers' eligibility may be particularly challenging due to the narrow product classifications of some trade remedy products, such as carbazole violet pigment 23. In companies that make multiple products, it may be difficult to identify which specific workers made the product subject to trade remedies.
DOD significantly downsized its acquisition workforce in the 1990s and 2000s as part of an overall reduction of military and civilian defense personnel after the end of the Cold War. According to DOD's April 2010 Strategic Human Capital Plan Update, the department decreased its core acquisition workforce from about 146,000 people in fiscal year 1998 to about 126,000 in fiscal year 2008. Meanwhile, the number of major defense acquisition programs increased from 76 to 93, and the total estimated cost to acquire them increased from nearly $805 billion to more than $1.6 trillion. The systems engineering and test and evaluation career fields were affected by the workforce cuts. For example, in 2008 the Defense Science Board reported that the Army's test and evaluation workforce was reduced by more than 55 percent from 1991 through 2007. In response to cuts in these career fields, DOD reduced its emphasis in these areas and increasingly relied on prime contractors to analyze and interpret developmental testing data. The Defense Science Board also found that many weapon programs were failing initial operational testing because of the lack of a disciplined systems engineering approach. Additionally, in 2008 the National Academy of Sciences reported that there were no longer enough systems engineers to fill programs' needs. The Academy observed that DOD cannot outsource its technical and program management experience and intellect and still expect to acquire new systems that are both effective and affordable.

Over the past several years, the Congress has called for DOD to improve its acquisition workforce and increase emphasis on systems engineering and developmental testing during weapon systems development. For example, the National Defense Authorization Act for Fiscal Year 2008 required the Secretary of Defense to establish the DOD Acquisition Workforce Fund for the recruitment, training, and retention of DOD acquisition personnel, which includes the systems engineering and test and evaluation career fields. In addition, the Reform Act contains a number of systems engineering and developmental testing requirements aimed at helping weapon programs establish a solid foundation prior to the start of development. For example, the Deputy Assistant Secretaries are expected to review and approve acquisition planning documents for major defense acquisition programs, as well as monitor program activities.

In response to the Reform Act, DOD established new offices within the Office of the Secretary of Defense for systems engineering and developmental testing to assist program offices as they develop their acquisition strategies prior to the start of development and then to oversee program office efforts to implement those strategies. The systems engineering office has about 120 programs in its portfolio, and the developmental test and evaluation office has about 250 programs. In addition, the offices provide advocacy, oversight, and guidance for their respective workforces throughout DOD. The Secretary of Defense also announced plans to increase the number of people performing acquisition activities by almost 20,000 employees between fiscal years 2009 and 2015 through new hiring actions and by converting contractor positions to government positions (insourcing).
The Strategic Human Capital Plan Update indicates that 22 percent of the almost 20,000-position growth will be for systems engineering and 1 percent for test and evaluation, which would increase the size of those career fields by 16 percent and 5 percent, respectively. The majority of these new positions will be in the military services at headquarters, program offices, and test range locations. The services maintain test ranges for developmental testing activities. Currently, 24 ranges are designated as part of the Major Range and Test Facility Base (MRTFB) because they have unique test capabilities that are used by multiple services. The Test Resource Management Center is responsible for ensuring that the MRTFB is adequately funded and maintained, while the services oversee the remaining non-MRTFB ranges and facilities, which perform more service-specific testing.

The new offices of the Deputy Assistant Secretaries of Defense for Systems Engineering and for Developmental Test and Evaluation have continued to make progress implementing the Reform Act requirements. Both offices, which provide assistance to program managers and perform oversight for decision makers, have issued additional policies and guidance and are overseeing and providing information on more weapon acquisition programs than they did last year. The systems engineering office will be staffed at 163 people in fiscal year 2012, and the developmental test and evaluation office will be staffed at 63 people, fewer than that office originally projected. Yet the offices are more reliant on contractor support than the Deputy Assistant Secretaries would like. In addition, many current and former testing officials continue to believe the developmental test and evaluation office does not have the resources or influence to effectively oversee and affect program decisions. However, it is unclear how many people are needed.

DOD established the two offices for systems engineering and developmental test and evaluation within the Office of the Secretary of Defense in June 2009. Since then, both offices have increased staffing, which is enabling them to meet statutory requirements for assisting and overseeing their portfolios of defense acquisition programs. The table below shows the actual and planned workforces for both offices. Most of the staffing increases have come from hiring contractor employees. While both offices have also increased their government staff, hiring freezes have curbed their ability to hire additional government employees and have prevented the developmental test and evaluation office from meeting its authorized staffing goal. The systems engineering office was authorized 28 government employees for fiscal year 2011 but has not yet been given permission to advertise for all of the positions because of the hiring freeze. According to the Deputy Assistant Secretary for Developmental Test and Evaluation, the office was initially authorized to have 70 employees but will be capped at 63 in fiscal year 2011, 4 of whom are detailees from the Test Resource Management Center.

In their fiscal year 2010 joint annual report to the Congress, the Deputy Assistant Secretaries reported on their ongoing efforts to implement Reform Act requirements. The table below includes information that was included in the report or provided to us on their fiscal years 2009 and 2010 activities. As shown, the offices have continued or increased their activities.
In addition to the progress highlighted in the table above, the offices are also taking actions in areas that they had not acted upon last year—issuing required guidance on the development and tracking of performance criteria and exercising a Reform Act option of designating the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation for concurrent service as the Director of the Test Resource Management Center.

• Developing performance criteria: The Reform Act requires the Deputy Assistant Secretaries, in coordination with an official designated by the Secretary of Defense, the Director of Performance Assessments and Root Cause Analysis, to jointly issue guidance on the development and tracking of detailed, measurable performance criteria for major defense acquisition programs. In response to this requirement, the Deputy Assistant Secretaries, in cooperation with the Director of Performance Assessments and Root Cause Analysis, have agreed that each office shall develop guidance within its respective functional area in coordination with the others and openly share the data and findings of those performance criteria in conducting their oversight. The Deputy Assistant Secretary for Systems Engineering developed a set of time-based metrics to assess each program's ability to execute its systems engineering plans and address risks the office had identified in prior reviews. The metrics measure program cost, schedule, staffing, reliability, availability and maintainability, software, integration, performance, and manufacturing, and they are to be incorporated into each program's systems engineering plan and evaluated at various points in the development process. Criteria developed by the Deputy Assistant Secretary for Developmental Test and Evaluation focus on early acquisition life-cycle activities to ensure that sound developmental test planning is performed from the beginning of development. Other criteria measure program results and are meant to provide an objective foundation for assessing a program's subsequent developmental testing performance as it approaches the production decision and the assessment of operational test readiness. The office plans to pilot test the metrics on six programs, with a goal of rolling them out to other programs by the end of 2011.

• Changing leadership of the Test Resource Management Center: Effective April 1, 2011, the Principal Deputy Under Secretary of Defense for Acquisition, Technology and Logistics designated the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation for concurrent service as the Director of the Test Resource Management Center. DOD had not acted upon this optional Reform Act provision last year. Both offices will continue to be managed separately and report to different authorities—developmental test and evaluation office activities will be reported to the Assistant Secretary of Defense for Research and Engineering, and Test Resource Management Center activities will be reported directly to the Under Secretary of Defense for Acquisition, Technology and Logistics. According to the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, the Principal Deputy Under Secretary for Acquisition, Technology and Logistics asked him to complete a study in the near future to identify efficiencies that could be obtained by merging some of the offices' activities. The study was started in July 2011.
The Deputy Assistant Secretary for Developmental Test and Evaluation indicated that there could be some limitations on his ability to streamline management and reporting activities or shift resources between the organizations because the Test Resource Management Center is designated by statute to be a field activity and the organizations are funded separately. DOD has not studied the possible legal ramifications of combining the offices.

In their fiscal year 2010 joint annual report to the Congress, the Deputy Assistant Secretaries also identified several focus areas for improvement. For example, the systems engineering office plans to reestablish the DOD Software Working Group to improve DOD's capability to address systemic software program issues. Among other things, the developmental test and evaluation office wants to develop a responsible test organization model that the services would use to designate the lead government test organization responsible for overseeing and/or conducting the developmental test and evaluation for an acquisition program. In addition, the office wants to issue a policy requiring programs to prioritize the use of government capabilities and to provide a cost-benefit analysis when they decide to fund the use or development of test capabilities at prime contractor sites.

In a departure from the fiscal year 2009 joint annual report, the fiscal year 2010 report did not contain a discussion of the extent to which weapon acquisition programs are fulfilling their systems engineering or test and evaluation master plans. Further, the fiscal year 2010 report did not include a discussion of the test and evaluation waivers or deviations that programs have received. Instead, the report identified the types of reviews or engagements both offices participated in for various programs. The Deputy Assistant Secretaries stated that they did not provide the information in the fiscal year 2010 report because they were directed to streamline the report. In May 2011, the Senate Armed Services Committee requested that DOD supplement the fiscal year 2010 report with this information, as required by the Reform Act. DOD has not yet provided the information.

The Deputy Assistant Secretaries believe they have had a positive influence on weapon acquisition programs over the past 2 years during milestone reviews with senior department leaders and through recommendations to program offices. Although the Assistant Secretary of Defense for Research and Engineering represents both the systems engineering and developmental test positions at Defense Acquisition Board meetings, the Deputy Assistant Secretaries said that either they or their designees also attend the meetings. They said that the Principal Deputy Under Secretary for Acquisition, Technology and Logistics has asked them to provide direct input about weapon acquisition programs at some of these meetings, including discussions on the Gray Eagle unmanned aircraft system, Ohio Replacement submarine, and Joint Strike Fighter aircraft programs. The Deputy Assistant Secretaries also said that program offices are making changes based on the recommendations made by their offices during regular program assessments and technical reviews.

While the Deputy Assistant Secretaries have made progress implementing the Reform Act requirements, we identified several organizational challenges that could limit their effectiveness. A summary of each of these challenges is presented below.
• Reliance on contractor employees: The Deputy Assistant Secretaries rely heavily on contractors to help perform office activities. In fiscal year 2010, for example, nearly 85 percent of the staff in the systems engineering office and 67 percent of the staff in the developmental test and evaluation office were contractors. Both Deputy Assistant Secretaries would like to have a larger proportion of government employees because they believe it is important to maintain a core cadre of people with the institutional knowledge and skills required to support current and future program office needs. However, they are not optimistic about their chances of getting additional government employees because of a civilian hiring freeze.

• Developmental Test and Evaluation Office influence: Current and former DOD test and evaluation officials continue to believe the developmental test and evaluation office could be more effective in its oversight role if it had greater influence. For example, they pointed out that the office's primary avenue for voicing concerns about weapon acquisition programs to senior leaders is at the overarching integrated product team meetings that take place in preparation for Defense Acquisition Board meetings. The integrated product team leader ultimately decides which organizations will get to present issues at the Defense Acquisition Board meetings. Current testing officials told us that in some cases developmental testing issues do not make the Defense Acquisition Board meeting agendas, which is a concern to officials who believe the developmental test and evaluation office should provide independent assessments of weapon acquisition programs directly to senior leaders. In addition, officials said that the deputy directors who attend the overarching meetings are not at the senior executive level like other meeting attendees, which in some cases has reduced their relative influence during these meetings. To ensure that developmental testing information, such as the assessment of operational test readiness, receives appropriate consideration by senior leaders, the Defense Science Board recommended in 2008 that the office report developmental testing issues directly to the Deputy Under Secretary of Defense for Acquisition and Technology. Currently, the office reports through an intermediary—the Assistant Secretary of Defense for Research and Engineering. Even though the systems engineering office reports through the same channel as the developmental test and evaluation office, the Deputy Assistant Secretary for Systems Engineering believes his office has the appropriate amount of influence because its primary emphasis is on assisting program managers in the development of their systems engineering plans. In contrast, the Deputy Assistant Secretary for Developmental Test and Evaluation believes his office should put about equal effort into assisting and assessing program office activities.

• Developmental Test and Evaluation Office resources: Information provided by the developmental test and evaluation office shows the office cannot provide full coverage of its portfolio of about 250 acquisition programs given its current workforce.
For example, in fiscal year 2010, when the office had 30 people, 89 programs (36 percent) did not receive any support, including the Air and Missile Defense Radar program and subelements of the Early Infantry Brigade Combat Team program, such as the Small Unmanned Ground Vehicle and the Unmanned Aircraft System. Although staffing doubled between fiscal years 2010 and 2011, the office still has not been able to support all the programs the Deputy Assistant Secretary for Developmental Test and Evaluation believes it should. The Deputy Assistant Secretary said the office has had to be selective in using its resources, and he has introduced a "triage" strategy for dealing with the overload of programs relative to the office's workforce. This strategy includes dropping virtually all programs below Acquisition Category I from the developmental testing oversight list and eliminating oversight of some major automated information systems. For the most part, the office focuses its efforts on major defense acquisition programs between milestone B (development start) and milestone C (the production decision) in order to retain sufficient depth with programs in development. It is providing minimal coverage to programs prior to the start of development, which is the most opportune time to influence a program's acquisition strategy. Based on the "triage" strategy, the Deputy Assistant Secretary estimates that only about half of the current portfolio of about 250 programs would receive the level of support he believes is needed with a staff of 63 people. On the other hand, the Deputy Assistant Secretary for Systems Engineering believes his office, which is to have 163 people in fiscal year 2012, will have enough staff to oversee its portfolio of about 120 programs.

While current and former DOD testing officials provided reasons for increasing the size of the developmental test and evaluation office, they could not specify the appropriate size for the office, as they indicated the issue has not been thoroughly analyzed. Officials familiar with the establishment of the office told us that three staffing scenarios were considered prior to the office being established—a high, medium, and low staffing scenario—but they indicated that no detailed analysis was done to support any of the scenarios. Under the high staffing scenario, the developmental test and evaluation office would have had 250 people, which would have matched the size of the office of the Director of Operational Test and Evaluation. The medium level of staffing, 120 to 150 people, was based on the size of a legacy developmental testing organization, and the low level, which called for 90 people, was based on an assumption about the number of programs each person would be responsible for overseeing. As shown earlier in table 1, the staffing goal is 70 people, which is fewer than the lowest staffing scenario considered.

Former testing officials believe an opportunity exists to both increase the developmental testing office's influence and address resource concerns by merging Test Resource Management Center and developmental testing office activities. They believe this would give the Deputy Assistant Secretary for Developmental Test and Evaluation the most flexibility in how to allocate resources.
They also pointed out that in the early 1990s, oversight of all developmental test and evaluation activities had been under one organization that reported directly to the Under Secretary of Defense for Acquisition.

The military services have been increasing the number of people in their systems engineering and test and evaluation career fields, but challenges exist that, if not properly addressed, could impede future workforce growth plans as well as testing at the ranges. The services planned to increase their systems engineering and test and evaluation career fields by about 5,000 people (14 percent) and about 300 people (4 percent), respectively, between fiscal years 2009 and 2015 through hiring actions and by insourcing contractor positions. These increases are part of DOD's overall efforts to increase the number of people in acquisition career fields. Through the end of fiscal year 2010, the services had increased the systems engineering career field by about half of their projections and had exceeded their planned growth for the test and evaluation career field. However, budget cuts and a clarification in DOD's insourcing approach may make hiring civilians more challenging in the future. The services' developmental test ranges are also experiencing declining budgets, as the fiscal year 2012 budget includes cuts of nearly $1.2 billion over the next 5 years to support accounts that pay for overhead costs. The services plan to cut range personnel in response to the budget reductions, which could offset some of the workforce gains they have already achieved. They have not yet determined how the cuts will be allocated across the ranges or the impact the cuts will have on meeting program office needs. Further budget cuts are possible based on the recent debt ceiling agreement, but details are unknown. Currently, the services lack common performance metrics that would assist in making funding decisions.

The services planned to increase their acquisition systems engineering and test and evaluation career fields by 14 percent and 4 percent, respectively, between fiscal years 2009 and 2015 through hiring and insourcing actions. This would increase the overall systems engineering career field by about 5,000 people, from about 35,000 to 40,000, and the test and evaluation career field by almost 300 people, from about 7,400 to 7,700. These new positions would be located at service headquarters, program offices, and test range locations. It should be noted that insourcing actions alone do not result in real growth in the collective number of civilians, military personnel, and contractors performing an activity, because insourcing only transfers positions from contractors to the government. Hiring additional civilian employees, on the other hand, would result in growth, assuming the contractor and military workforces remain stable. The following table shows the baseline, goal, and current number of civilian and military personnel performing systems engineering and test and evaluation activities for each of the services at the end of fiscal year 2010, as well as the percentage of the growth goal target achieved.

Through the end of fiscal year 2010, the services made significant progress toward increasing the two career fields. For example, the services increased the systems engineering career field by about 2,600 people, about half of the projected growth. Collectively, they have achieved 94 percent of the growth goal target planned for that career field.
The services achieved 104 percent of the growth goal target planned for the test and evaluation career field by adding over 600 people. Information provided by the Army and Navy shows that about 74 percent of their systems engineering career field growth came through new hires and the remaining 26 percent through insourcing. About 86 percent of their test and evaluation career field growth came through new hires and the remaining 14 percent through insourcing.

Recently proposed budget changes would result in modifications to the services' workforce growth plans for both acquisition career fields. The services still plan to increase the systems engineering career field between fiscal years 2011 and 2015, but by about 800 positions fewer than originally planned. As a result, career field growth would be 10 percent instead of 14 percent. On the other hand, the services plan to hire 400 more people than they originally planned for the test and evaluation career field, despite the fact that the overall number of DOD civilians is frozen at fiscal year 2010 levels. This would result in 6 percent growth in the career field instead of 4 percent. Service officials stated, however, that achieving additional career field growth could be difficult because of a recent clarification in DOD's insourcing policy. According to a March 2011 memorandum issued jointly by the Under Secretary of Defense for Acquisition, Technology and Logistics and the Under Secretary of Defense (Comptroller)/Chief Financial Officer, a case-by-case approach will be used for additional insourcing of acquisition functions based on critical need, whether a function is inherently governmental, and the benefit demonstrated by a cost-benefit analysis. The memorandum also states that additional insourcing must be supported by current budget levels. In cases where added insourcing would breach the existing civilian ceilings, the proposal and associated justification must be provided to the Director of Human Capital Initiatives; the proposal will then be reviewed by the two Under Secretaries of Defense who issued the memorandum and approved by the Deputy Secretary of Defense. Of the nearly 2,000 people the Army and Navy now plan to hire for the systems engineering career field between fiscal years 2011 and 2015, 96 percent are to come through insourcing, and about 40 percent of the more than 200 people they plan to hire for the test and evaluation career field are to come through insourcing.

Although the services have increased their test and evaluation staff, each of the 12 test ranges we visited experienced some combination of recruiting, hiring, training, and retention challenges. For example, Pacific Missile Range Facility officials stated that the range's location on the island of Kauai, Hawaii, has made it difficult to recruit personnel because of a lower pay grade structure and higher cost of living compared with other test ranges. Range officials stated that it can take months to hire new employees, leading many qualified applicants to seek employment opportunities elsewhere. Test ranges have begun using the acquisition workforce expedited hiring authority, which allows DOD components to streamline the process for making offers to qualified acquisition personnel. However, ranges have interpreted this authority differently. Service test and evaluation executives said they would, based on our observations, clarify the policy to the ranges to ensure it is fully understood.
Most of the test range officials we spoke with also had concerns about the timeliness and quality of the Defense Acquisition University's training classes. The Defense Acquisition University is coordinating with the services to address the increased demand for acquisition training. Finally, some test ranges are having difficulty retaining engineers and some software specialists because the private sector can pay them higher salaries. Officials at Aberdeen Test Center, Maryland, are concerned that many of their employees will take higher-paying jobs with organizations that are moving into their area as the result of a Base Realignment and Closure decision.

The services' developmental test range budgets are being reduced, but the full impact of the cuts has yet to be determined. The fiscal year 2012 President's Budget includes cuts of nearly $1.2 billion (17 percent) through fiscal year 2015 to the ranges' institutional funding accounts, which fund operational and overhead expenses such as personnel, facilities, and equipment costs. According to service officials, the budget cuts resulted from direction they received late in the fiscal year 2012 budget cycle to keep their civilian workforce at the same level as at the end of fiscal year 2010. Research, development, test and evaluation accounts that fund range testing activities absorbed a large portion of the cuts. As shown in figure 1, the services had already planned to reduce their MRTFB research, development, test and evaluation funding between fiscal years 2012 and 2015. Although range budgets reached a peak in fiscal year 2011, budget estimates for that year projected funding decreases in subsequent years. According to service officials, these reductions were anticipated because of expected workload reductions, savings associated with implementing efficiency initiatives, and a civilian pay freeze. The fiscal year 2012 budget contains additional, more significant decreases beyond those forecast in the previous year's budget. In total, this budget provides $5.7 billion for the ranges between fiscal years 2012 and 2015—a 17 percent reduction from the $6.9 billion forecast in the fiscal year 2011 budget. Most notably, the Army and Air Force range budgets were each cut by over $100 million in fiscal year 2012 alone.

The Chairman of the Senate Armed Services Committee recently expressed concern that these proposed reductions threaten to seriously undermine the implementation of the Reform Act's requirement to rebuild DOD's systems engineering and developmental testing organizations and requested that the proposed cuts be reviewed. The Army and Air Force are in the process of reviewing the cuts and determining where they will be made. To minimize the impact on testing activities, they are examining potential reductions to overhead and administrative staff at headquarters and field locations. According to service headquarters officials, if DOD's proposed budget cuts between fiscal years 2012 and 2015 remain intact, the services may have to cut some of their developmental test capabilities as well as the personnel who conduct tests. This could offset some of the test and evaluation career field gains achieved over the past 2 years. Test officials said that reductions in test personnel could limit the amount of testing performed on weapon acquisition programs, which could increase the risk associated with those programs or extend their test schedules. Final decisions on where to take the cuts have not yet been made.
Therefore, we could not determine the impact the funding cuts would have on the ranges' ability to meet program office testing needs. Range officials indicated that even before these proposed cuts they had difficulty maintaining their test capabilities. The National Defense Authorization Act for Fiscal Year 2003 required the Secretary of Defense, by fiscal year 2006, to establish the funding objective of ensuring that the overhead and institutional costs of the MRTFB ranges were fully funded through the major test and evaluation investment accounts and that DOD customers were charged no more than the direct costs of testing. The law also required DOD to establish the Test Resource Management Center and required the director of that organization to certify whether the services' proposed budgets for test and evaluation activities are adequate. To comply with this law, the services increased their range operating budgets by over 50 percent in fiscal year 2006. Although MRTFB funding was fairly stable between fiscal years 2006 and 2011, range officials said they have had difficulty maintaining their test capabilities because the infrastructure is aging and operating costs for expenses like utilities and fuel have grown at a higher rate than overall funding. As a result, officials said they have had to move money from their range modernization accounts to fund operating costs, fix only things that are broken or in emergency status, or fund capability upgrades over several years instead of a shorter period of time. According to range officials, these challenges are less of a problem for non-MRTFB ranges like the Redstone Test Center and other test and laboratory activities funded through working capital fund accounts, where customers pay direct and overhead costs.

Service officials discussed several strategies, which could be used alone or in combination, to lessen the impact of budget cuts and funding concerns. Under one strategy, the ranges would curtail certain testing, mothball or close test facilities, or consolidate test capabilities. Another option would be to move certain test capabilities out of the MRTFB management structure, thereby allowing ranges to charge customers the full cost of testing. Additionally, on the basis of our range observations, service officials said more specific guidance for interpreting financial management regulations on what constitutes direct and overhead charges for MRTFB operations could be developed to clarify and standardize the types of costs that can be passed on to customers. Service test and evaluation executives and the Deputy Assistant Secretary for Developmental Test and Evaluation believe the ranges' current policy interpretation is too restrictive, is inconsistent across the MRTFB, and constrains service options as they strive to sustain MRTFB capabilities. The Test Resource Management Center recently established a team to study the issue.

The services have not implemented common range performance measures that would help them justify funding, make workforce decisions, allocate funding effectively, or make difficult decisions about mothballing, closing, or consolidating test capabilities, if necessary. Although ranges collect performance data relevant to their operations, these indicators may not be useful in making higher-level infrastructure decisions that cut across several ranges.
According to service officials, it is very difficult to develop a common set of performance measures because of the uniqueness of each range and its variable capacity. We have found that performance measures can assist managers in making decisions about future strategies, planning and budgeting, identifying priorities, and allocating resources. Some efforts are under way to provide decision makers with more information, but they are still in progress and have not yet been approved or implemented. The Test Resource Management Center has sponsored an effort to develop a comprehensive set of range metrics. According to the services, early efforts were not successful and were not well received. While the Air Force plans to use metrics resulting from the Test Resource Management Center's effort, the Army is evaluating the development of a readiness reporting system and a workload model for its ranges that could provide a better basis for investment or divestment decisions. The Navy is also in the process of developing metrics for its MRTFB test capabilities covering the systems' condition, capacity, competency, and importance. Once developed, these metrics are expected to assist decision makers in directing future investment funding and to ensure test capabilities are adequate to support programs.

The Reform Act points to the need for DOD to develop more robust systems engineering and developmental test and evaluation capabilities. DOD established new organizations, and the services developed resource plans, in order to increase emphasis on these activities. From an organizational standpoint, the offices of the Deputy Assistant Secretaries for Systems Engineering and Developmental Test and Evaluation are providing assistance to program managers in developing acquisition plans and providing oversight of program efforts for the Under Secretary of Defense for Acquisition, Technology and Logistics and the Congress. However, the developmental testing office is not as robust or efficient as it could be, due in part to resource and organizational constraints. In addition, while the military services have made significant progress to date in increasing their systems engineering and developmental testing workforce capabilities, planned workforce reductions could offset these gains.

It is incumbent upon DOD to provide the most effective systems engineering and developmental testing capability it can afford, and it should have a sound analytical basis for establishing and resourcing that capability. However, it is not clear that DOD is yet at this point, especially for developmental testing. DOD has not analyzed the right size of the developmental test and evaluation office or captured data that would reinforce range funding and workforce actions or suggest needed adjustments. Additionally, the Deputy Assistant Secretary for Developmental Test and Evaluation believes that a statutory provision may limit his ability to achieve efficiencies and address office challenges. To the extent DOD cannot provide adequate systems engineering and developmental testing support to its weapons portfolio, the risks of executing the portfolio within cost and schedule are increased.
We recommend that the Secretary of Defense take the following two actions:

• assess the resources and influence needed by the developmental test and evaluation office to assist and oversee defense acquisition programs, including
  – the number of defense acquisition programs that can be supported by different developmental test and evaluation office staffing levels, specifying the total number of personnel, the mix of government and contractor employees, and the number of senior executive service personnel needed for each of these staffing levels;
  – whether the Test Resource Management Center and the office of the Deputy Assistant Secretary for Developmental Test and Evaluation should be combined, or resources shifted between the organizations, to more effectively support the activities of both organizations and, if so, identify for Congress any statutory revisions that would be necessary; and
  – the proper reporting channel, taking into account the decision on whether or not to combine the organizations, the statutory oversight requirements, and the level of influence needed to oversee and assess program office developmental testing and service budgeting activities; and

• develop a plan to implement the results of the assessment.

We also recommend that the Secretary of Defense, with input from the military services, take the following two actions:

• develop metrics to assess the MRTFB test capabilities (expanding to DOD non-MRTFB and non-DOD government test facilities once an approved set of metrics is in place supporting the MRTFB), justify funding, and assist in making decisions on the right-sizing of personnel, how best to allocate funding, and whether to mothball, shut down, or consolidate test facilities. These efforts should be coordinated with the Test Resource Management Center; and

• report the impact that the budget cuts reflected in the fiscal year 2012 budget, as well as the insourcing policy clarification, will have on (1) the total workforce (civilians, military, and contractors) that supports both of these activities and (2) the services' ability to meet program office systems engineering and developmental test and evaluation needs.

Contingent upon the results of DOD's assessment, the Congress may want to consider revising any applicable statutory provisions necessary to allow DOD to combine or shift resources between the Test Resource Management Center and the office of the Deputy Assistant Secretary for Developmental Test and Evaluation.

DOD provided us with written comments on a draft of this report. DOD concurred with two recommendations and partially concurred with the other two. For the recommendations with which it partially concurred, DOD suggested wording changes to provide greater clarity; we agreed with the suggested changes and reworded our recommendations accordingly. DOD's comments appear in appendix II. DOD also provided technical comments, which we incorporated in the report as appropriate.

In its response, DOD noted that the Deputy Assistant Secretary for Developmental Test and Evaluation has directed a study to assess the resources and influence needed by the developmental test and evaluation office to assist and oversee defense acquisition programs. The Deputy Assistant Secretary for Developmental Test and Evaluation will develop a plan to implement actionable recommendations from the results of the assessment, with the approval of the Under Secretary of Defense for Acquisition, Technology and Logistics.
The Deputy Assistant Secretary for Developmental Test and Evaluation also plans to establish a working group, with participation from the military services, to develop metrics to assess MRTFB test capabilities, justify funding, and assist in making decisions on human capital and test facilities management. Finally, the Deputy Assistant Secretaries for Systems Engineering and Developmental Test and Evaluation plan to identify, in their joint annual report to the Congress, any impacts to the state of the workforce due to funding modifications or DOD's workforce policy updates.

We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

This report examines the military services' systems engineering and developmental testing workforce capabilities and the Department of Defense's (DOD) efforts to implement the Weapon Systems Acquisition Reform Act of 2009 (Reform Act) requirements. Specifically, we examined (1) the progress DOD has made in implementing the Reform Act's systems engineering and developmental testing requirements and (2) whether there are challenges at the military service level that could affect their systems engineering and developmental testing activities.

To measure DOD's progress in implementing Reform Act requirements related to systems engineering and developmental testing, we interviewed officials and collected pertinent documents from the offices of the Deputy Assistant Secretaries of Defense for Systems Engineering and for Developmental Test and Evaluation. Specifically, we collected information on their efforts to develop additional policy and guidance; review and approve acquisition planning documents of major acquisition programs; monitor and review activities of major acquisition programs; and develop guidance for the development of performance metrics to use on weapon acquisition programs. We also reviewed their staffing plans and questioned the Deputy Assistant Secretaries, as well as former DOD developmental testing experts, on whether each of the offices has the necessary resources and influence to fulfill its mission.

In order to determine whether there are challenges at the military service level that could affect future systems engineering and developmental testing activities, we looked at planned and actual workforce growth and developmental test range activities, and specifically at the following.

• We collected and compared the military services' original workforce growth projections with their current workforce projections for the systems engineering and test and evaluation career fields. For this report, when we use the term systems engineering career field, we are referring to the Systems Planning, Research, Development, and Engineering—Systems Engineering and Program Systems Engineering career fields. The Air Force's and Army's original plans were based on their fiscal year 2008 budget estimate submissions, and the Navy's was based on its fiscal year 2009 budget estimate submission.
Current plans for each of the services were based on their fiscal year 2012 budget estimate submissions. Workforce data covered the fiscal years 2009-2015 time frame for both the original and current plans. We interviewed officials within the Office of the Secretary of Defense and the military services to determine the underlying causes of variances between the two plans and to determine how the Secretary of Defense's cost efficiency measures are affecting new hiring and insourcing plans. Navy officials stated that they used fiscal year 2009 as their baseline because, before that time, information from various commands was not centralized and they could not verify the accuracy of the numbers. Navy officials stated that they have confidence in the validity of the numbers that support the fiscal year 2009 baseline because factors that affect workforce counts, such as recodes, decodes, reassignments, and attrition, are treated consistently.

• We compared the services' baseline workforce data with end of fiscal year 2010 workforce numbers to determine how much growth occurred in the two career fields over the past 2 fiscal years (a minimal sketch of this comparison follows this list). The Air Force's and Army's workforce baseline was the end of fiscal year 2008, and the Navy's was the end of fiscal year 2009. We assessed the reliability of these data by interviewing knowledgeable officials in the Air Force, Army, and Navy about the processes they use to ensure both the integrity and reliability of the manpower databases used to track acquisition personnel. We determined that the data were sufficiently reliable for the purposes of this report.

• We conducted site visits to 12 ranges and facilities—11 designated as part of the Major Range and Test Facility Base (MRTFB) and one a non-MRTFB range—to determine challenges affecting DOD's ability to conduct developmental testing. We focused our discussions on a broad range of topics, including funding, workforce, range facilities and instrumentation, encroachment, contractor duplication of DOD test facilities, and test resource capacity and demand. We selected four ranges per service based on geographical diversity, level of funding, type of testing, and recommendations from the Office of the Secretary of Defense and the services. The test ranges we visited were Aberdeen Test Center, Maryland; Air Force Flight Test Center, California; Arnold Engineering Development Center, Tennessee; Atlantic Undersea Test and Evaluation Center, Bahamas; 46th Test Group, New Mexico; 46th Test Wing, Florida; High Energy Laser Systems Test Facility, New Mexico; Naval Air Warfare Center – Aircraft Division, Maryland; Naval Air Warfare Center – Weapons Division, California; Pacific Missile Range Facility, Hawaii; Redstone Test Center, Alabama; and White Sands Missile Range, New Mexico.

• Finally, we compared the developmental test range funding included in the President's Budget Future Years Defense Program for fiscal years 2011 and 2012. We discussed with officials from the Test Resource Management Center, the military services, and the office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation the differences between the two plans and how decreases in the fiscal year 2012 plan would affect developmental testing activities. We also discussed with these officials the progress DOD and the services have made in developing metrics that could be used to make workforce and investment decisions.
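The growth comparison described in the first bullet above reduces to simple arithmetic on baseline, current, and goal head counts. The following Python sketch is illustrative only; it is not GAO's methodology code, and the head counts and function name are hypothetical.

```python
# Illustrative sketch of the workforce growth comparison; all numbers
# below are hypothetical.

def percent_of_growth_goal(baseline: int, current: int, goal: int) -> float:
    """Share of a career field's planned growth achieved so far:
    actual growth since the baseline divided by planned growth."""
    return (current - baseline) / (goal - baseline)

# Hypothetical service: 12,000 engineers at the baseline, a goal of
# 13,500, and 12,700 on board at the end of fiscal year 2010.
achieved = percent_of_growth_goal(baseline=12_000, current=12_700, goal=13_500)
print(f"{achieved:.0%} of growth goal achieved")  # prints "47% of growth goal achieved"
```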
We conducted this performance audit from September 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Bruce Thomas, Assistant Director; Cheryl Andrew; Rae Ann Sapp; Keith Hudson; Laura Greifner; and Marie Ahearn made key contributions to this report.
For the past 2 years, the Department of Defense (DOD) has been implementing the Weapon Systems Acquisition Reform Act (Reform Act) requirements for systems engineering and developmental testing. These activities are important to DOD's ability to control acquisition costs, which increased by $135 billion over the past 2 years for 98 major defense acquisition programs. GAO was asked to determine (1) DOD's progress in implementing the Reform Act's requirements and (2) whether there are challenges at the military service level that could affect their systems engineering and developmental testing activities. To do this, GAO analyzed implementation status documents, discussed developmental testing office concerns with current and former DOD officials, and analyzed military service workforce growth plans and test range funding data.

The new offices for systems engineering and developmental test and evaluation are continuing to make progress implementing Reform Act requirements. Since GAO's 2010 report on this topic, the Deputy Assistant Secretaries for Systems Engineering and Developmental Test and Evaluation have issued additional policies and guidance, assisted more weapon acquisition programs in the development of acquisition plans, and provided input to senior leaders at Defense Acquisition Board meetings. DOD also designated the Deputy Assistant Secretary for Developmental Test and Evaluation for concurrent service as the Director of the Test Resource Management Center. This was an optional Reform Act provision, which places oversight of testing resources and acquisition program developmental testing activities under one official. Despite these steps, the developmental test and evaluation office reports having difficulty covering its portfolio of about 250 defense acquisition programs with its current authorized staff of 63 people. Current and former testing officials believe the office needs more influence and resources to be effective, but they said a thorough analysis has not been done to determine the appropriate office size. Further, according to the Deputy Assistant Secretary for Developmental Test and Evaluation, a statutory provision that designates the Test Resource Management Center as a field activity may limit his ability to achieve management and reporting efficiencies that could be obtained by combining or shifting resources between the two organizations. GAO has a matter for congressional consideration to allow shifting resources between the Test Resource Management Center and the developmental test and evaluation office.

The military services are facing workforce challenges that could curb systems engineering and developmental testing efforts, if not properly addressed. The services planned to increase their systems engineering and test and evaluation career fields by about 5,000 people (14 percent) and about 300 people (4 percent), respectively, between fiscal years 2009 and 2015 through hiring actions and converting contractor positions to government positions. The services have increased the systems engineering career field by about half of their projections and exceeded their planned growth for the test and evaluation career field through the end of fiscal year 2010. However, future growth may be difficult because of budget cuts and a clarification in DOD's insourcing approach, which may make civilian hiring more difficult. For example, the services now plan to hire about 800 fewer systems engineers by 2015 than they originally projected.
Further, cuts to the developmental test ranges' fiscal year 2012 budgets of nearly $1.2 billion (17 percent) over the next 5 years could offset some of the workforce gains already achieved. Currently, the services lack metrics that could be used to justify funding levels, effectively allocate funding cuts, make workforce decisions, or make difficult decisions related to mothballing, closing, or consolidating test capabilities, if future budget cuts are necessary. To the extent DOD cannot provide adequate systems engineering and developmental testing support to its weapon systems portfolio, the risks of executing the portfolio within cost and schedule are increased.

GAO recommends that DOD assess the resources needed by the developmental test and evaluation office, develop a plan to implement the assessment, develop metrics to aid funding decisions, and report the effect budget cuts are having on the services' ability to meet program office needs. GAO also has a matter for congressional consideration. DOD concurred with two recommendations and partially concurred with the other two, offering clarifying language that GAO incorporated.
For many years, DOE obtained plutonium for nuclear weapons by reprocessing spent fuel from some of its nuclear reactors. Before the spent fuel could be reprocessed, however, it had to be stored temporarily in water basins so that short-lived fission products could decay, reducing radiation levels to the point where the fuel could be handled with less danger to workers and less damage to chemicals used to extract the plutonium. When Hanford’s fuel-processing facilities were permanently shut down in 1992, DOE had no strategy for dealing with the stockpiled fuel. Hanford currently has more than 2,100 metric tons of spent fuel—about 80 percent of the Department’s total inventory. DOE and the Defense Nuclear Facilities Safety Board have identified the following safety problems at Hanford’s spent fuel storage basins: The spent fuel, which is made of uranium and has been irradiated in a nuclear reactor, was not intended for long-term storage in water and is corroding and crumbling. The fuel is stored in canisters at the bottom of the basins, which contain about 16 feet of water to absorb heat from the radioactive decay of the fuel and shield workers from radiation. Because DOE’s spent fuel was designed to be dissolved during processing to remove the plutonium it contains, only a thin coating (called cladding) was placed over the uranium; in some cases, this cladding is broken or damaged. As a result, uranium, plutonium, and other radioactive materials from the spent fuel have accumulated in the bottoms of storage canisters, in sludge at the bottom of the storage basins, and in the filters and other basin components. The two storage basins, which were not designed for the long-term storage of spent fuel, are vulnerable to leaks and earthquake damage. Constructed in 1951, the basins are now well beyond their expected useful life of 20 years. They are seismically unsound, and at least one has leaked water directly to the surrounding soils. For example, from 1974 through 1979, about 15 million gallons of contaminated water leaked from one basin. The same basin leaked again in 1993. In both incidents, it is likely that contamination reached the Columbia River. The buildings that house the basins are also inadequate, and the location is environmentally precarious. The buildings are not airtight and allow sand, dirt, and dust to enter the water basins, contributing to the buildup of sludge at the bottom of each pool. The basins are about 1,400 feet away from the Columbia River—much too close, in the view of DOE and other parties interested in protecting the river from environmental damage. The existing storage poses risks of exposing workers, the public, and the environment to radioactive materials. For example, an earthquake or even an industrial accident could cause a basin to rupture, releasing large quantities of contaminated basin water to the soil and the Columbia River. The loss of water in the basins could also expose workers and the public to the airborne transmission of radioactive materials released from the corroded fuel and the sludge in the bottom of the basins. Evaluations conducted in 1993 by DOE and in 1994 by the Safety Board concluded that improving the spent fuel’s storage was urgently needed. Because of the deteriorated condition of the spent fuel and one of the storage basins, the Safety Board described the situation as one where “imminent hazards could arise within two or three years,” a time period that has already passed.
The Safety Board recommended that, among other things, DOE take action to improve the storage of spent fuel on a high-priority, accelerated basis. DOE has responded to the safety concerns about the spent fuel by developing a plan to dry the fuel and store it farther from the Columbia River. This strategy, which has been evolving since 1994, includes cleaning and packaging the fuel in the basins, removing and drying the fuel at a new processing facility to be built near the basins, and transporting the fuel to a new interim storage facility on the Hanford site several miles from the river. The fuel will be stored there in sealed containers until its final disposition in a permanent geologic repository. The project also involves treating and disposing of the sludge, debris, and water left in the basins after the fuel is removed. The spent fuel project is organized into subprojects and includes constructing two major facilities—a fuel-drying facility and a canister storage facility. The project also involves designing and constructing a transportation system to move the fuel between the facilities; special canisters to hold the fuel; and various systems and processes to clean, package, and dry the fuel and to move the 14-foot-long canisters to their storage tubes once inside the canister storage building. Although the dry storage of spent fuel is common in the commercial nuclear industry, the specific form of spent fuel that DOE uses—metallic uranium—is more difficult to dry because of its tendency to ignite when in contact with air. Because the spent fuel project at Hanford involves handling, conditioning, and packaging nuclear materials, and DOE expects eventually to be subject to external regulation for nuclear safety, DOE has required safety standards for the project equivalent to the Nuclear Regulatory Commission’s requirements, to the extent practicable. DOE wants to achieve at least three goals through this project—eliminating the continued corrosion of the fuel; quickly reducing the safety risks to workers, the public, and the environment; and lowering the costs associated with the safe storage of the fuel until its final disposition. DOE believes that the dry storage of the spent fuel presents the best option for achieving these goals. The original project schedule established in April 1995 called for starting the fuel’s retrieval in December 1997 and completing the project in September 2001. The schedule has been revised twice since then, and in April 1998, the contractors proposed a new schedule that would begin the fuel’s retrieval in November 2000 and complete the project by December 2005. (See table 1.) DOE’s recurring revisions to the project’s schedule reflect an uncertainty about DOE’s ability to meet a firm completion date—an uncertainty that has not abated. For example, the April 1998 schedule revision was proposed for two reasons. First, Duke Engineering officials identified additional schedule slippage beyond the dates shown in the December 1997 schedule. Second, the Environmental Protection Agency (EPA) and the Washington State Department of Ecology (Ecology) have demanded enforceable milestones in the Tri-Party Agreement for the spent fuel project. DOE’s Assistant Manager for Waste Management said that DOE did not want to establish project milestones with its regulators on the basis of the December 1997 schedule, which was an accelerated schedule with high risk, only to have the dates change before the negotiations were complete.
Milestone commitments under the Tri-Party Agreement represent contractual commitments that, if not met, can result in fines being assessed against DOE. To establish a schedule with a better chance of being met, and one that would not change again during negotiations with the regulators, Duke Engineering and Fluor Daniel proposed, in April 1998, a schedule that includes contingency for unforeseen problems. DOE and its regulators expect to have an enforceable agreement on the project’s milestones by July 31, 1998. With each change in the schedule, the estimates of the project’s total costs have increased. The original cost estimate was about $740 million, while the cost estimate associated with the April 1998 proposed schedule is about $1.4 billion—an 84 percent increase. (See table 2.) In a February 9, 1998, letter to this Committee, DOE stated that the December 1997 estimate for the spent fuel project (over $1 billion and 9 years to complete) was still less than early estimates for addressing the spent fuel storage problem at Hanford, which were projected to cost up to $2 billion and take up to 15 years to complete. DOE reasoned that, even with the cost increases now being experienced, the project would still cost roughly $1 billion less and take 6 years less than the original project concept. However, we believe that to evaluate cost and schedule performance on a specific project that DOE is implementing, current estimates need to be compared with the original estimates for this project. Such a comparison shows that estimated costs have nearly doubled and that the project’s duration has been extended by over 4 years. Cost and schedule growth is not unusual on DOE major system acquisition projects (generally projects costing $100 million or more). In our 1996 report on DOE’s major system acquisitions, we reported that at least half of the ongoing projects and most of the completed projects had cost overruns and/or schedule slippage. Some of the reasons for cost overruns and schedule slippage are similar to the management problems that have occurred on this spent fuel project. For example, our 1996 report notes that the management of one project was criticized for insufficient attention to technical, institutional, and management issues. Also in that report, we cited a 1993 report that described the causes of cost increases in the environmental restoration program, including design changes, poor project definition, and turnover within the project team. We noted in our report that in recent years, DOE had implemented several initiatives to improve its overall management of these large projects. The schedule for the project that DOE approved in April 1995 was intentionally optimistic and had virtually no contingency for unforeseen problems. DOE insisted on a very optimistic schedule for the project because of the deteriorated condition of the spent fuel and the storage basins and the need to resolve the storage problems expeditiously. In addition, DOE officials thought that a tight schedule would force Westinghouse to accomplish the project more quickly. However, virtually no schedule or cost flexibility existed to address either the technical problems disclosed after characterizing the fuel or other changes implemented as the project progressed.
Under the initial project proposal, which was presented to DOE’s Assistant Secretary for Environmental Management in November 1994, the removal of the fuel from the basins was to begin in December 1998, and the project was to be completed by April 2006. The proposal was developed by Westinghouse, the company that was the management and operations contractor at Hanford until October 1996. In order to meet those dates, the proposal included several shortcuts to normal DOE procedures, such as (1) allowing some of the project’s activities, including procurement actions, to start before the Environmental Impact Statement Record of Decision was issued and (2) allowing a “fast-track” approach in which the project’s activities—such as the characterization of the fuel, design of facilities and equipment, safety analyses, and construction—would proceed concurrently instead of sequentially. The Assistant Secretary approved the proposal but directed that the schedule be compressed so that beginning the retrieval of the fuel could be moved up by 18 months—to June 1997. DOE and Westinghouse officials reevaluated the project and determined that there was virtually no chance that the date specified by the Assistant Secretary could be met. Westinghouse proposed—and DOE accepted—a compromise schedule under which retrieval of the fuel would start by December 31, 1997—12 months earlier than under the initial Westinghouse proposal. Westinghouse estimated that it had an 80-percent chance of meeting the date. As part of its program to reduce costs while accelerating cleanup, DOE rewarded Westinghouse with an additional incentive fee payment for the estimated savings associated with adjusting the project to an accelerated schedule. The program, called Challenge 170, provided incentives for Westinghouse to improve its productivity and eliminate unnecessary work. Westinghouse’s fee was calculated as a percentage of the productivity increase or reduction in work scope. Westinghouse submitted formal change requests on the spent fuel project to reflect the compressed schedule and other savings, and DOE estimated that these actions would save $6.9 million, of which about $2.5 million could be attributed to accelerating the schedule. In 1996, DOE paid Westinghouse as much as $368,000 in fees for these expected savings. However, because of certain cost increases in later years, these claimed savings appear instead to have been deferrals of work into succeeding years, and deferred work would not be eligible for an award fee. DOE would need to conduct a detailed analysis to determine whether the $2.5 million was a savings. According to a DOE contracting officer, it is unclear whether DOE has the contractual authority to demand that Westinghouse repay any of the $368,000 fee if DOE determines that the savings did not materialize. He said that at the time of our review, DOE had not assessed its legal basis for seeking repayment of the fee. Unfortunately, the compressed schedule, including the December 31, 1997, deadline for beginning the retrieval of the fuel, had little chance of being achieved because it was based on the underlying premise that the project would encounter few problems affecting the schedule. Therefore, the schedule had virtually no flexibility to address unforeseen problems. Actual experience, however, did not match the assumption; problems were encountered that changed the scope of work required and affected the project’s schedule.
For example: Characterization activities conducted after the schedule was established revealed several surprises, including the poor condition of the fuel in closed canisters (which necessitated developing a new water treatment system for one basin); the presence of aluminum hydroxide on some of the fuel, which, in part, led to a redesign of the storage canisters to better withstand any buildup of hydrogen gas; and the presence of uranium particles and polychlorinated biphenyls (PCBs) in the pool sludge, which necessitated chemical treatment of the sludge before disposal. Strategies for drying the fuel and for controlling pressure in the fuel storage canisters changed significantly. Fuel-drying strategies evolved from a hot-conditioning system, in which the fuel would be heated to 300 degrees centigrade; to a two-step approach of cold drying at 50 degrees centigrade followed by hot conditioning; to a strategy approved in April 1998 to eliminate hot conditioning and rely solely on the cold drying of the fuel. For the fuel storage canisters, the strategy evolved from using closed containers with a pressure relief system to sealed containers with no pressure relief. These changes occurred as DOE and its contractors learned more about the characteristics of the spent fuel and incorporated new strategies to increase safety and reduce long-term storage costs. According to the Assistant Manager for Waste Management at Hanford, the optimistic schedule for the project reflected a trade-off between the health and safety risks associated with continuing to store the spent fuel in the basins and the management risks associated with setting an optimistic schedule and knowing that it would be difficult to meet. The Assistant Manager said that DOE wanted to resolve the basin storage problems quickly and that DOE was also looking for ways to improve the performance of its contractors. Setting an optimistic schedule for the spent fuel project was one strategy that DOE used to try to obtain improved performance. During the project’s history, two different companies have been responsible for the project. Westinghouse, which at the time was the overall Hanford management and operations contractor for DOE, managed the project from its inception in late 1994 until October 1996. In October 1996, the responsibility for the spent fuel project shifted to Duke Engineering, a company that has managed the project as a subcontractor to Fluor Daniel, the new site contractor. Under both Westinghouse and Duke Engineering, the project has suffered from management problems that exacerbated the problems already inherent in the project’s schedule. The following are examples: Westinghouse had difficulty implementing sound project management practices. According to DOE officials and available documentation, Westinghouse did not use consistent and reliable estimating procedures to develop project baseline costs or make effective use of the baseline schedule as a tool to manage the project. For example, although Westinghouse developed a baseline for the project and had a system for controlling changes to the baseline, DOE officials said that Westinghouse did not have a sound planning basis for some of its cost estimates. DOE found that some cost estimates had inadequate supporting documentation and that estimating procedures were not applied consistently.
In addition, when Duke Engineering assumed responsibility for the project in October 1996, several adjustments had to be made to the project baseline to incorporate more realistic estimates of the time needed to accomplish certain tasks and to add work scope not previously included in the schedule. The former Westinghouse spent fuel project director disagreed that these adjustments were needed and said that when Westinghouse turned over the project to the new contractors in October 1996, the project was on schedule and within budget. DOE officials at Hanford, however, said the Westinghouse schedule did not have a good planning basis or sound justifications and that it never met the requirements for baseline management and control that Westinghouse agreed to in the project management plan. Severe weather design requirements were not incorporated into the canister storage building in a timely manner. The former Westinghouse spent fuel project director said that neither the requirement nor the options for meeting it were clear, but DOE’s position has been that the requirement was clear and that Westinghouse failed to implement the requirement in a timely manner, extending the construction schedule for the building and jeopardizing the start of spent fuel retrieval from the basins. Duke Engineering was unable to keep the various subprojects working in accordance with the project’s schedule. When Duke Engineering took responsibility for the project in October 1996, it established, with the help of Fluor Daniel, an integrated project baseline schedule and a system for controlling change, which DOE approved in April 1997. However, DOE’s evaluation of the project in September 1997 found significant problems with the management of the project, including Duke Engineering’s poor management and contracting practices. For example, DOE criticized Duke Engineering for the weak management of subcontractors when the subcontractors did not perform satisfactorily. In October 1997, the Safety Board reported that the project had management deficiencies, including the inadequate identification of problems, inadequate actions to resolve problems, and a failure to communicate changes and performance expectations to project personnel. Fluor Daniel and DOE officials told us that, in their opinion, these problems occurred because Duke Engineering did not have the management and technical expertise in place to properly manage the project. The current president of Duke Engineering told us he agreed with these assessments. Duke Engineering also has had problems with the management and staffing of the project’s safety analysis documentation effort. The Safety Board, in its October 1997 report, concluded that a key element in the ultimate success of the project was the timely completion of a number of subproject safety analyses. We found delays in getting the safety documentation prepared and approved on various subprojects. For example, the approval of the final safety analysis report for the canister storage building is now 9 months later than originally planned. The safety analysis report for the cold vacuum drying facility is now 13 months late and was recently rejected by DOE as unacceptable. Because safety documentation for the drying facility is on the “critical path” for the project schedule, delays in completing the documentation will add an estimated 46 days to the overall project.
According to DOE’s Assistant Manager for Waste Management, safety documentation has been inadequate because of problems with the underlying engineering and design work on the project. He said that Duke Engineering and its subcontractors have not been doing acceptable engineering work on the project and that Duke Engineering did not have people with sufficient skills assigned to prepare safety documents. Duke Engineering officials agreed that there have been design and engineering problems associated with the safety documentation process. They said that they recently added staff with additional safety expertise and made other changes to address these problems. Although DOE and Fluor Daniel were aware of the problems that were causing the schedule to slip, their oversight actions were not effective in resolving those problems. During the time that Westinghouse was still in charge of the project, DOE officials did not take aggressive action to improve contractor performance. These officials said they could not get Westinghouse to develop a sound planning basis for the project baseline and cost estimate or improve its management controls. According to DOE’s spent fuel project manager and the Assistant Manager for Waste Management at Hanford, Westinghouse staff did not have the skills necessary to implement project management techniques. However, we found only limited documentation showing that DOE officials were pointing out these deficiencies to Westinghouse and asking for improved performance. The former Westinghouse spent fuel project director disagreed with this assessment of Westinghouse’s performance. He said that Westinghouse had made substantial progress on the project and that DOE’s evaluations of Westinghouse’s performance were generally positive. DOE’s Assistant Manager for Waste Management told us that Westinghouse did make progress while managing the project but that, overall, Westinghouse’s performance was mixed and left considerable room for improvement, especially in the areas of developing a sound basis for the project baseline and making effective use of project management tools. He added that DOE was trying to get improved performance from Westinghouse by focusing performance reviews on the positive aspects of performance, not by emphasizing the deficiencies. In addition, the Manager of DOE’s Richland Operations Office told us that changing site management contractors in October 1996 from Westinghouse to Fluor Daniel resulted from DOE’s actions to deal with Westinghouse’s performance, including the problems with Westinghouse’s performance on the spent fuel project. After Duke Engineering assumed responsibility for the project as part of the new site management contract, Fluor Daniel provided primary oversight of the project for DOE. According to Fluor Daniel’s spent fuel project director, it became apparent in December 1996 that Duke Engineering was struggling to establish an integrated project baseline, and in April 1997, after the baseline was approved, Duke Engineering had difficulty ensuring that the various subprojects stayed on schedule. She described Fluor Daniel’s oversight of Duke Engineering as increasing in scope and intensity during this period as Duke Engineering struggled to perform on the project. During this period, DOE officials expressed to Fluor Daniel several concerns about the project, including poor quality assurance, unresolved technical issues, and schedule slippage. 
In its evaluation of Fluor Daniel’s performance at Hanford during fiscal year 1997, DOE gave the company a marginal rating for its performance on the spent fuel project. While giving Fluor Daniel credit for having a strong project director and for aggressively pursuing project-related issues, DOE said the marginal rating was justified because of unfavorable cost and schedule variances, missed milestones, and safety issues. Fluor Daniel assessed its own performance on the project as marginal because of construction safety problems, safety analysis reporting issues, and the large slippage in the schedule resulting from Duke Engineering’s performance. Despite these oversight actions by DOE, in October 1997, the Safety Board reported that DOE officials at Hanford were not sufficiently aware of the technical details of the project to prevent delays from occurring. They also reported that, up to that point, DOE had not been able to accurately determine the status of the project on a routine basis. DOE and its contractors have been working to improve their performance on the project. The December 1997 revision to the project’s schedule was first proposed in August 1997, and it became a catalyst for action on the project. The Safety Board and DOE both conducted reviews that were critical of Duke Engineering’s project management. Fluor Daniel stepped up its oversight activities, and in December 1997, it sent Duke Engineering a letter (called a cure notice) requiring the company to correct problems and improve performance on the project or face possible contractual remedies, including termination for default or recompetition of the subcontract rather than its extension. In response, Duke Engineering prepared a recovery plan and added several new managers, including a new spent fuel project director. The company also made several organizational changes to strengthen the management of the subprojects, establish greater accountability for meeting the project’s schedule, and speed the resolution of technical issues. In addition, Duke Engineering modified its procedures to better identify emerging issues and control changes to the project’s technical baseline. On May 1, 1998, Fluor Daniel notified Duke Engineering that the company had remedied the problems identified in the cure notice and that, if improvements continued, Duke Engineering’s subcontract would probably be extended. DOE’s Manager, Richland Operations Office, however, was concerned about these decisions and asked Fluor Daniel for (1) information on its basis for deciding that the problems had been remedied and (2) a recommendation, with supporting justification, by May 29, 1998, on whether to extend or recompete the subcontract with Duke Engineering. DOE has also strengthened its oversight of the project and has taken steps to improve its process for reviewing safety documentation. For example, the Assistant Manager for Waste Management at Hanford began devoting more of his time to the project and specifically to overseeing the contractors’ actions. DOE also worked with the contractors to clarify expectations for safety documents and improve the safety review process to reduce the number of duplicative, conflicting, or insignificant comments being made. DOE also increased its use of incentive fees to influence contractor performance. Both Fluor Daniel and Duke Engineering have cost-reimbursement contracts under which the incentive fees are based on the achievement of certain performance objectives.
To a more limited extent, this was also true when Westinghouse was the site contractor. For example, for fiscal year 1996, DOE reduced the incentive fees paid to Westinghouse by $625,000 below the $3 million available, primarily because of the company’s delays in finishing the design of the canister storage building. Fees paid to Fluor Daniel and Duke Engineering are also being affected by performance. Although DOE has not completed its evaluation of the fees to be paid to Fluor Daniel and Duke Engineering for fiscal year 1997, Fluor Daniel’s director of contract management estimated that Fluor Daniel and Duke Engineering will earn only about $1.0 million of the $3.2 million fee available on the spent fuel project. For fiscal year 1998, at least $8.1 million in incentives is available for the spent fuel project. DOE officials at Hanford are negotiating with the regulators a revised project schedule that, the Assistant Manager for Waste Management said, includes contingency to address unforeseen problems, so that DOE and its contractors are more confident that the dates can be met. In addition, Duke Engineering is exploring opportunities to shorten the project’s time frame by providing the suppliers and lower-tier subcontractors with incentives. Despite recent actions to improve contractors’ performance and to make the schedule more realistic, however, remaining uncertainties with the project make it difficult for DOE to predict project outcomes with a high degree of reliability. The following are the main uncertainties: Technical questions and changes continue. For example, concerns persist about the discovery of an aluminum hydroxide coating on some of the fuel. Aluminum hydroxide is about 35 percent water and not easy to remove from the fuel. As radiation breaks down the water in the aluminum hydroxide into hydrogen and oxygen, the coating could become a significant source of pressure in the storage canisters. According to DOE’s spent fuel project director, the companies working on this part of the project believe they have a technical solution, but it has not yet been approved. Any delay in completing the safety basis for dealing with the increased pressure, or in implementing methods to remove the aluminum hydroxide coating, could delay the start of fuel removal. Operational performance has not been proven. According to DOE’s spent fuel project director, other than some limited testing, the systems and equipment to clean basin water, handle the spent fuel, and dry the fuel have not been operated to test reliability, the speed of operation, or the adequacy of operator training. She said that operational performance is now the area of greatest uncertainty with the project. The overall project continues to lose ground against the baseline schedule. For example, 1 month after the December 1997 schedule revision, the project was already $22 million over budget and the schedule was beginning to slip. Only 3 months later, in April 1998, the contractors proposed a new schedule adding over 2 more years and $277 million to the project. This latest proposed schedule included about 5 months and $47 million in contingency for unforeseen problems. Even so, it is unclear whether all of the delays and new work have been identified, or whether the factors contributing to these cost and schedule increases have been discovered and dealt with.
Although Duke Engineering has recently made changes to its management team and operating procedures and Fluor Daniel has stepped up its oversight of the project, it is too early to tell if these changes will improve Duke Engineering’s ability to manage the project within cost and schedule constraints. However, DOE’s Assistant Manager for Waste Management said in a March 1998 letter to Fluor Daniel that many of the management problems identified 6 months earlier continue to exist. He also cited a lack of teamwork between Fluor Daniel and Duke Engineering that was interfering with technical integration of the work and overall progress on the project. The management and oversight problems with the spent fuel project at Hanford are examples of a long DOE history of difficulties in managing major construction projects. The projects that do get finished are usually late and well over budget. The spent fuel project at Hanford is no exception—DOE and its contractors have clearly not met cost and schedule targets. The original project schedule contained virtually no flexibility to deal with unforeseen problems, and management and oversight problems exacerbated the problems inherent in the original schedule. Although adding time and contingency funding to the schedule and taking steps to improve the project’s management and oversight should improve the probability of meeting future cost and schedule targets, several remaining uncertainties affect the ability of DOE and its contractors to reliably predict a completion date or cost. As a result, it is important that DOE continue to be vigilant and aggressive in its oversight of contractors’ activities if the project is to be completed within the latest proposed schedule and cost. We provided DOE with a draft of our testimony for its review and comment. We met with DOE officials, including the Manager of the Richland Operations Office. DOE generally agreed with our testimony but said the testimony did not clearly state that spent fuel project costs include both capital construction costs and operating costs over the life of the project. We have clarified the testimony accordingly. DOE also said we should recognize that its action to replace Westinghouse as the Hanford site management contractor was due, in part, to DOE’s dissatisfaction with Westinghouse’s performance on the spent fuel project and represented a significant action on DOE’s part. We added this information to our testimony. In addition, DOE noted that it had aggressively identified contractor problems since the inception of the project but had addressed them with Westinghouse informally. Because there was only limited documentation of these actions and contractor management problems continued, we did not add this information to the body of the testimony. Finally, DOE said that our testimony did not make it clear that (1) some of the changes to the project schedule were due to unavoidable increases in project scope as solutions to technical problems were developed and (2) the April 1998 proposed project schedule and cost estimate included a contingency for additional unforeseen problems. We clarified our testimony accordingly. DOE also suggested several technical corrections, which we incorporated as appropriate. We also discussed our findings with Fluor Daniel, Duke Engineering, and Westinghouse. Fluor Daniel and Duke Engineering generally agreed with our findings and provided several technical corrections, which we incorporated as appropriate.
Westinghouse disagreed with our findings regarding its performance on the project. Westinghouse officials said that when they turned over the project to the new contractors it was on schedule and within budget, that they had made substantial progress on the project, and that DOE’s evaluations of Westinghouse’s performance were generally positive. We have summarized Westinghouse’s views in our testimony. However, we believe we have described Westinghouse’s performance accurately. Westinghouse also suggested several technical corrections, which we incorporated as appropriate. Our scope and methodology are included as an appendix to our testimony. Thank you, Mr. Chairman and Members of the Subcommittee. That concludes our testimony. We would be pleased to respond to any questions you may have. To determine the risks associated with current spent fuel storage at Hanford and the Department of Energy’s (DOE) plans to reduce those risks, we reviewed DOE and Defense Nuclear Facilities Safety Board reports on current storage conditions and DOE documents describing the spent fuel storage project. We also reviewed the environmental impact statement for the spent fuel project. In addition, we interviewed DOE and contractor officials, as well as officials from the Environmental Protection Agency and the Washington State Department of Ecology. To determine the current status of the project, we reviewed project schedules and cost estimates approved by DOE, as well as related project documents, including safety basis documentation, project status reports, and correspondence. We also interviewed DOE and contractor officials to understand the project’s history, the reasons for the changes to schedule and cost estimates, and the major events leading to those changes. To determine the major causes of schedule delays and cost increases, we reviewed DOE’s, the Safety Board’s, and the contractors’ records and reports. We also interviewed officials from those organizations to obtain their views on the causes of the project’s difficulties. We also reviewed reports on other DOE projects to understand why some of those projects had cost and schedule problems. To determine what steps have been taken to correct the project’s problems, any penalties for poor performance, and the likelihood of meeting the latest schedule and cost estimates, we reviewed the contractors’ records, correspondence, and contract files. We also interviewed DOE and contractor officials. Our review was performed from January through April 1998, in accordance with generally accepted government auditing standards.
Pursuant to a congressional request, GAO discussed the Department of Energy's (DOE) project to change how DOE stores spent (or irradiated) nuclear fuel from its nuclear reactors at DOE's Hanford site in Washington, focusing on the: (1) risks posed by the storage of the spent nuclear fuel; (2) project's status; (3) major reasons for delays and cost increases; and (4) measures being taken to address these delays and cost increases. GAO noted that: (1) as stored, most of the spent fuel at Hanford presents a risk of releasing nuclear materials to the environment and a consequent danger both to workers and the public; (2) this fuel sits in two water basins that are well beyond their design life and are located just 1,400 feet from the Columbia River; (3) never designed for long-term storage in water, some of the spent fuel has corroded, creating a radioactive sludge that has accumulated in the storage basins; (4) because of leaks in the basins, workers risk exposure to radioactive materials if contaminated water is released to the soil, and the public risks exposure if this water moves through the soil to the river; (5) it is likely that radioactive materials carried in water leaking from one of the basins have reached the river at least twice in the past; (6) although progress has been made in designing and constructing the new facilities, the schedule proposed by the contractors in April 1998 is over 4 years behind the original schedule for completion, and the estimated costs to build and operate the project have almost doubled, to about $1.4 billion; (7) the date to begin moving the spent fuel out of the basins, an important milestone for the project, given the health and safety risks associated with storage conditions, will be delayed until November 2000--almost 3 years beyond the original plan; (8) DOE wanted a compressed schedule for completing the project because of safety concerns at the existing storage basins and because DOE thought that a compressed schedule would improve the contractor's performance; (9) the lack of adequate management by the companies working on the spent fuel project for DOE--Westinghouse Hanford Company, the company that managed the project until 1996, and Duke Engineering, the company now responsible for the project--also contributed to schedule delays and cost overruns; (10) furthermore, oversight by both DOE and its management and integration contractor at the Hanford site--Fluor Daniel Hanford, Inc.--was insufficient to ensure that problems were quickly corrected; (11) recent management changes have been made, and oversight of the project has become more aggressive; (12) in addition, Duke Engineering has replaced several key managers and reorganized its operations and procedures; (13) DOE is negotiating with its regulators--the Environmental Protection Agency and the Washington Department of Ecology--new milestone dates that DOE believes it can meet; and (14) these problems and unresolved technical questions will continue to affect DOE's ability to set reliable targets.
Most people with mental retardation, cerebral palsy, epilepsy, or other developmental disabilities who reside in large public institutions have many cognitive, physical, and functional impairments. Because their impairments often limit their ability to communicate concerns and many lack family members to advocate on their behalf, they are highly vulnerable to abuse, neglect, or other forms of mistreatment. As of 1994, more than 80 percent of residents in large public institutions were diagnosed as either severely or profoundly retarded. More than half of all residents cannot communicate verbally and require help with such basic activities as eating, dressing, and using the toilet. In addition, nearly half of all residents have behavioral disorders and require special staff attention, and almost one-third require the attention of psychiatric specialists. The Congress established the ICF/MR program as an optional Medicaid benefit in 1971 to respond to evidence of widespread neglect of the developmentally disabled in state institutions, many of which provided little more than custodial care. The program provides federal Medicaid funds to states in exchange for their institutions’ meeting minimum federal requirements for a safe environment, appropriate active treatment, and qualified professional staff. In 1994, more than 62,000 developmentally disabled individuals lived in 434 large public institutions certified as ICFs/MR for participation in Medicaid. States operated 392 of these institutions; county and city governments operated 42. The average number of beds in each facility was 170, though facilities ranged in size from 16 to more than 1,000 beds. These institutions provided services on a 24-hour basis as needed. Services included medical and nursing services, physical and occupational therapy, psychological services, recreational and social services, and speech and audiology services. Compared with residents in years past, those in large public institutions today are older and more medically fragile and have more complex behavioral and psychiatric disorders. In recent years, states have reduced the number of people living in these large public ICFs/MR by housing them in smaller, mostly private ICFs/MR and other community residential settings. Large public institutions generally are not accepting many new admissions, and many states have been closing or downsizing their large institutions. The Health Care Financing Administration (HCFA) published final regulations for quality of care in ICFs/MR in 1974 and revised them in 1988. To be certified to participate in Medicaid, ICFs/MR must meet eight conditions of participation (CoP) contained in federal regulations. The regulations are designed to protect the health and safety of residents and ensure that they are receiving active treatment for their disability and not merely custodial care. Each CoP encompasses a broad range of discrete standards that HCFA determined were essential to a well-run facility. The CoPs cover most areas of facility operation, including administration, minimum staffing requirements, provision of active treatment services, health care services, and physical plant requirements. (See app. II for a more detailed description of the ICF/MR CoPs.) The eight CoPs comprise 378 specific standards and elements. HCFA requires that states conduct annual on-site inspections of ICFs/MR to assess the quality of care provided and to certify that they continue to meet federal standards for Medicaid participation.
The state health department usually serves as the survey agency. These agencies may also conduct complaint surveys at any time during the year in response to specific allegations of unsafe conditions or deficient care. If the surveyors identify deficiencies, the institution must submit a plan of correction to the survey agency and correct—or show substantial progress toward correcting—any deficiency within a specified time period. For serious deficiencies, those cited as violating the CoPs, HCFA requires that the institution be terminated from Medicaid participation within 90 days unless corrections are made and verified by the survey agency during a follow-up visit. If a facility meets all eight CoPs but has deficiencies in one or more of the standards or elements, it may have up to 12 months to achieve compliance as long as the deficiency does not immediately jeopardize residents’ health and safety. HCFA’s 10 regional offices oversee state implementation of Medicaid ICF/MR regulations by monitoring state efforts to ensure that ICFs/MR comply with the regulations. HCFA regional office staff directly survey some ICFs/MR—primarily to monitor the performance of state survey agencies. In addition, regional office staff provide training, support, and consultation to state agency surveyors. The Department of Justice also has a role in overseeing public institutions for people with developmental disabilities. The Civil Rights of Institutionalized Persons Act (CRIPA) authorizes Justice to investigate allegations of unsafe conditions and deficient care and to file suit to protect the civil rights of individuals living in institutions operated by or on behalf of state or local governments. Justice Department investigations are conducted on site by Justice attorneys and expert consultants who interview facility staff and residents, review records, and inspect the physical environment. The Justice Department seeks to determine whether a deviation from current standards of practice exists and, if so, whether the deviation violates an individual’s civil rights. Unlike HCFA, Justice has no written standards or guidelines for its investigations. Justice Department officials told us that the standards they apply are generally accepted professional practice standards as defined in current professional literature and applied by the experts they retain to inspect these institutions. Since the enactment of CRIPA in 1980, Justice has been involved in investigations and enforcement actions in 38 cases involving large public institutions for the developmentally disabled in 20 states and Puerto Rico. As of July 1996, 13 of these investigations remained ongoing, 17 had been closed or resolved as a result of corrections being made, 7 continued to be monitored, and 1 litigated case was on appeal. State Medicaid surveys and Justice Department investigations continue to identify serious deficient care practices in large public ICFs/MR. A few of these practices have resulted in serious harm to residents, including injury, illness, physical degeneration, and death. As of August 1995, 28 of the 434 large public institutions were out of compliance with at least one CoP at the time of their most recent annual state survey. On the last four annual surveys, 122 of these institutions had at least one CoP violation. (See table 1.) 
These serious violations of Medicaid regulations commonly included inadequate staffing to protect individuals from harm, failure to provide residents with treatment needed to prevent degeneration, and insufficient protection of residents’ rights. Lack of adequate active treatment was the most common CoP violation cited in large public ICFs/MR. Serious active treatment deficiencies were cited 115 times in 84 institutions on the past four annual surveys. Eighteen institutions were cited for this CoP deficiency on their most recent annual survey. Serious active treatment deficiencies cited on the survey reports included, for example, staff’s failure to prevent dangerous aggressive behavior, failure to ensure that a resident with a seizure disorder and a history of injuries wore prescribed protective equipment, and failure to implement recommended therapy and treatment to maintain a resident’s ability to function and communicate. State surveyors also frequently found other CoP violations. They found, for example, residents of one state institution who had suffered severe hypothermia, pneumonia, and other serious illnesses and injuries as a consequence of physical plant deterioration, inadequate training and deployment of professional staff, failure to provide needed medical treatment, drug administration errors, and insufficient supervision of residents. Surveyors determined that the state had failed to provide sufficient management, organization, and support to meet the health care needs of residents and cited the facility for violating the governing body and management CoP. Other CoP violations found during this period include serious staffing deficiencies and client protection violations. Surveyors of one state institution reported deficiencies such as excessive turnover, insufficient staff deployment, frequent caseload changes, and lack of staff training. In this institution, surveyors also found residents who were vulnerable to abuse and mistreatment by staff, as well as staff who failed to report allegations of mistreatment, abuse, and neglect in a timely manner. Other staff who were known to have abused residents in the past continued to work with residents and did not receive required human rights training. State agencies also conduct complaint surveys in large public ICFs/MR in response to alleged deficiencies reported by employees, advocates, family members, providers, or others. State agencies’ complaint surveys found 48 serious CoP violations from 1991 through 1994. The most frequently cited CoP violations were client protections, cited 22 times, and facility staffing, cited 11 times. (See table 1.) State survey agencies generally certify that facilities have sufficiently improved to come back into compliance with federal CoPs. When survey agencies find noncompliance with CoPs, they may revisit the facility several times before certifying that a violation has been corrected. On average, it takes about 60 days for this to occur. Since 1990, the Justice Department has found seriously deficient care that violated residents’ civil rights in 17 large public institutions for the mentally retarded or developmentally disabled in 10 states. Its investigations have identified instances of residents dying, suffering serious injury, or undergoing irreversible physical degeneration as a result of abuse by staff and other residents, deficient medical and psychiatric care, inadequate supervision, and failure to evaluate and treat serious behavioral disorders.
Justice found, for example, that a resident died of internal injuries in 1995 after an alleged beating by a staff member in one state institution that had a pattern of unexplained physical injuries to residents. In addition, the Department found that in the same facility a few years earlier, a moderately retarded resident suffered massive brain damage and lost the ability to walk and talk because staff failed to provide emergency care in response to a life-threatening seizure. In another state institution, Justice found facility incident reports from 1992 and 1993 documenting that some residents were covered with ants and one resident was found with an infestation of maggots and bloody drainage from her ear. In another facility, Justice found that a resident was strangled to death in an incorrectly applied restraint in 1989. We reviewed Justice’s findings letters issued since 1990 for 15 institutions. The most common serious problems identified were deficient medical and psychiatric care practices, such as inadequate diagnosis and treatment of illness; inappropriate use of psychotropic medications; excessive or inappropriate use of restraints; inadequate staffing and supervision of residents; inadequate or insufficient training programs for residents and staff; inadequate therapy services; deficient medical record keeping; and inadequate feeding practices. States rarely contest Justice’s findings in court. Only two CRIPA cases involving large public ICFs/MR have been litigated. Department officials told us that the prospect of litigation usually prompts states to negotiate with Justice and to initiate corrective actions. The Department resolved 11 CRIPA cases without having to take legal action beyond issuing the findings letter because the states corrected the deficiencies. In another 12 cases involving large public ICFs/MR, states agreed to enter into a consent decree with the Department. About half of these latter cases required a civil contempt motion or other legal action to enforce the terms of the decree. State survey agencies may be certifying some large public ICFs/MR that do not meet federal standards. Although state survey agencies have the primary responsibility for monitoring the care in ICFs/MR on an ongoing basis, HCFA surveyors and Justice Department investigators have identified more deficiencies—and more serious deficiencies—than have state survey agencies. Federal monitoring surveys conducted by HCFA regional office staff identified more numerous or more serious problems in some large public ICFs/MR than did state agency surveys of the same institutions. According to HCFA, federal surveyors noted significant differences between their findings and those of the state survey agencies in 12 percent of federal monitoring surveys conducted in large public ICFs/MR between 1991 and 1994. HCFA surveyors determine that significant differences exist when, in their judgment, they have identified serious violations that existed at the time of the state agency survey but that the state surveyors did not identify. When conducting monitoring surveys, HCFA regional office staff use the same standards and guidelines as state agency surveyors. These federal surveys are designed to assess the adequacy of state certification efforts in ensuring that ICFs/MR meet federal standards and, for public facilities, the effectiveness of delegating to states the responsibility for surveying and monitoring the care provided in their own institutions.
Justice also identified more deficiencies—and more serious deficiencies—in some large public ICFs/MR than did state survey agencies. Although some deficient care practices found by Justice were also noted on state agency surveys of the same institutions, others were not noted by state surveyors. For example, Justice found seriously deficient care that violated residents’ civil rights in 11 state institutions it investigated between 1991 and 1995. Of these 11, state agency surveys cited only 2 for a CoP deficiency even though the state surveys were conducted within a year of Justice’s inspection. The types of serious deficiencies often cited in Justice reports but not in state agency surveys of the same institution included deficiencies in medical practices and psychiatric care, inappropriate use of psychotropic medications, and excessive use of restraints. Several factors have contributed to the inability or failure of HCFA and state survey agencies to identify and prevent recurring quality-of-care deficiencies in some large public ICFs/MR. First, states have not identified all important quality-of-care concerns because of the limited approach and resources of Medicaid surveys. Second, enforcement efforts have not been sufficient to ensure that deficient care practices do not recur. Third, because states are responsible for both delivering and monitoring the care provided in most public institutions, state agency surveys of these institutions may lack the necessary independence to avoid conflicts of interest. Finally, a decline in direct HCFA oversight has reduced HCFA’s ability to monitor problems and help correct them. Although HCFA has recently begun to implement several initiatives to address some of these weaknesses, others remain unresolved. Differences between the approach and resources of Medicaid surveys and Justice Department investigations may explain why Medicaid surveys have not always identified the serious deficiencies that Justice investigations have. State surveyors examine a broad range of facility practices, environmental conditions, and client outcomes to ensure minimum compliance with HCFA standards. Surveys are generally limited to a review of the current care provided to a sample of residents in an institution, are conducted annually, and may last 1 to 2 weeks at a large public institution. In contrast, Justice Department investigations are intended to determine whether civil rights violations exist. They generally focus on deficient care practices about which Justice has received specific allegations, often related to medical and psychiatric care. Such investigations may include a review of care provided to all individuals in a facility, extend over several months, and include an examination of client and facility records covering several years to assess patterns of professional practice. The professional qualifications and expertise of individuals conducting state agency surveys and Justice’s investigations also differ. State surveyors are usually nurses, social workers, or generalists in a health or health-related field. Not all have expertise in developmental disabilities. Although HCFA recommends that at least one member of a state survey team be a qualified mental retardation professional (QMRP), 17 states had no QMRPs on their survey agency staffs as of March 1996. Justice’s investigators are usually physicians, psychiatrists, therapists, and others with special expertise in working with the developmentally disabled. 
According to HCFA and Justice officials, Justice Department investigators generally can better challenge the judgment of professionals in the institution regarding the care provided to individual residents than can state surveyors. HCFA officials acknowledged that the differences between Medicaid surveys and Justice investigations could explain some of the differences between their findings. They told us, however, that they have begun to implement several changes to the survey process to increase the likelihood that state surveys will identify all serious deficiencies. These include new instructions to surveyors for assessing the seriousness of deficiencies, increased training for surveyors and providers, and implementation of a new survey protocol intended to focus more attention on critical quality-of-care elements and client outcomes. The new survey protocol reduces the number of items that must be assessed each year and places greatest emphasis on client protections, active treatment, client behavior and facility practices, and health care services. The new protocol gives surveyors latitude, however, to expand the scope of a facility’s survey if they find specific problems. HCFA’s pilot test of the new protocol showed that although surveyors identified fewer deficiencies overall than with the standard protocol, they issued more citations for the most serious CoP violations. HCFA is conducting training for state surveyors and providers on this new protocol and plans to monitor certain aspects of its implementation. Even when state survey agencies identify deficiencies in large public ICFs/MR, state enforcement efforts do not always ensure that facilities’ corrections are sufficient to prevent the recurrence of the same serious deficiencies. Although state survey agencies almost always certify that serious deficiencies have been corrected, they subsequently cite many institutions for the same violations. For example, between December 1990 and May 1995, state survey agencies cited 33 large public institutions for violating the same CoP on at least one subsequent survey within the next 3 years. Moreover, 25 were cited for violating the same CoP on one or more consecutive surveys. HCFA officials told us that the sanctions available to the states under Medicaid have not always been effective in preventing recurring violations and are rarely used against large public ICFs/MR. Only two possible sanctions are available under the regulations for CoP violations: suspension—that is, denial of Medicaid reimbursement for new admissions—or termination from the program. Medicaid regulations do not contain a penalty for repeat violations that occur after corrective action. Denying reimbursement to large public institutions for new admissions is not a very relevant sanction because many of these institutions are downsizing or closing and are not generally accepting many new admissions. Furthermore, terminating a large institution from the Medicaid program is counterproductive because denying federal funds may further compromise the care of those in the institution. No large public institutions were terminated from Medicaid for reasons of deficient care and not reinstated from 1990 through 1994, the period for which data were readily available. HCFA officials told us that they were particularly concerned about institutions where surveyors found repeat violations of the same CoPs. 
Although the officials have not explored the usefulness of other sanctions or approaches to enforcement for large public ICFs/MR, they told us that the newly implemented survey procedure was intended to better identify the underlying causes of facility deficiencies, possibly reducing repeat violations. A potential conflict of interest exists because states both operate large public ICFs/MR and certify that these institutions meet federal standards for Medicaid participation. States can lose substantial funds if care is found to be seriously deficient and their institutions lose Medicaid certification. The state survey agency, usually a part of a state’s department of health, conducts surveys to determine whether ICFs/MR are in compliance with quality standards. It reports and makes its recommendation to the state Medicaid agency, which makes the final determination of provider certification. Medicaid rules do not require any independent federal or other outside review for a state’s ICF/MR to remain certified. HCFA officials, provider representatives, and advocates have expressed concern that this lack of independence compromises the integrity of the survey process. HCFA regional officials told us of instances in which state surveyors were pressured by officials from their own and other state agencies to overlook problems or downplay the seriousness of deficient care in large state institutions. Of concern to the state officials in these instances was the imposition of sanctions that would have cost the state federal Medicaid funds. HCFA regional office staff may mitigate the effects of potential conflicts of interest by training surveyors, accompanying state surveyors during their inspections, or directly surveying the institutions themselves. Direct federal oversight has declined dramatically in recent years despite its importance for independent monitoring of the care provided in large public institutions and the performance of state survey agencies. HCFA’s primary oversight mechanism has been the federal monitoring survey, which assesses state agency determinations of provider compliance. As shown in figure 1, the number of federal monitoring surveys conducted in large public ICFs/MR has declined from 31 in 1990 to only 5 in 1995. HCFA began surveying large public ICFs/MR in response to congressional hearings in the mid-1980s that detailed many instances of poor quality and abusive conditions in Medicaid-certified institutions for the developmentally disabled. In 1985, HCFA hired 45 employees, about half of whom had special expertise in working with persons with developmental disabilities, to conduct direct federal surveys. Officials from HCFA and Justice, providers, and experts told us that this effort helped improve the quality of care in many institutions and stimulate improvements to the state survey process. The recent decline in federal oversight, however, has increased the potential for abusive and dangerous conditions in these institutions. HCFA officials told us that regional office staff have neither conducted sufficient reviews nor acted on facility deficiencies in recent years because of competing priorities and resource constraints. According to these officials, resources previously used for federal surveys of ICFs/MR have been diverted to allow compliance with requirements for increased federal monitoring surveys of nursing facilities and for other reasons. 
Regional office officials report that these resource constraints have limited their current review efforts to mostly private ICFs/MR of six beds or less. HCFA regional office staff are now less able to identify deficiencies or areas of weakness and to provide targeted training or other support to state surveyors. State survey agencies have recently reported a decline in the number of serious CoP violations in large public ICFs/MR. Yet without direct monitoring, HCFA cannot determine whether this decline is due to real improvements in conditions or to decreased vigilance or competence on the part of state agency surveyors. Although HCFA officials have expressed concern about the current level of direct federal oversight of state survey agencies and large public ICFs/MR, they have no plans to increase resources for these efforts. Instead, HCFA officials told us they are examining ways to better target their limited oversight resources. While they are planning to improve their use of existing data for monitoring purposes, they also plan to develop a system of quality indicators to provide information on facility conditions on an ongoing basis. These officials told us that they expect this system of quality indicators to be operational in about 4 years. The ICF/MR program—intended to provide a safe environment with appropriate treatment by qualified professional staff—serves a particularly vulnerable population of individuals with mental retardation and other developmental disabilities in large public ICFs/MR. Most of these institutions comply with Medicaid quality-of-care standards. Serious deficiencies continue to occur, however, in some institutions despite federal standards, oversight by HCFA and state agencies, and continuing investigations by the Department of Justice. States are the key players in ensuring that ICFs/MR meet federal standards. Although their oversight includes annual on-site visits by state survey agencies to all large public ICFs/MR, these agencies have not identified all instances of seriously deficient care. HCFA reviews and Justice Department investigations have identified some instances of deficient care, including medical care, that were not reported in state surveys. Furthermore, serious deficiencies continue to recur in some of these institutions. Effective federal oversight of large public ICFs/MR and the state survey agencies that inspect them requires that the inspection process be well defined and include essential elements of health care, active treatment, and safety; that enforcement efforts prevent the recurrence of problems; that surveyors be independent; and that HCFA officials have sufficient information to monitor the performance of institutions and state survey agencies. The approach and resources of Medicaid surveys, the lack of effective enforcement mechanisms, the potential conflicts of interest occurring when states are charged with surveying the facilities they operate, and the decline in direct federal monitoring efforts have all weakened oversight of large public ICFs/MR and state survey agencies. HCFA has begun to implement changes to the structure and process of state agency surveys of ICFs/MR. The new approach to surveys may result in identifying more serious deficiencies in large public institutions. This change, and others that HCFA is implementing to more efficiently use limited federal and state resources, may also reduce the impact of some of the other weaknesses we have identified. 
Nonetheless, the lack of independence in state surveys coupled with little direct federal monitoring remains a particular concern. HCFA needs to strengthen the oversight of its ICF/MR program and collect sufficient information in a timely manner to assess the effectiveness of the new approach in identifying and ensuring the correction of deficient care. To improve HCFA’s oversight of large public ICFs/MR, we recommend that the Administrator of HCFA assess the effectiveness of its new survey approach in ensuring that serious deficiencies at large public ICFs/MR are identified and corrected; take steps, such as enhanced monitoring of state survey agencies or direct inspection of institutions, to address the potential conflict of interest that occurs when states are both the operators and inspectors of ICFs/MR; and determine whether the application of a wider range of enforcement mechanisms would more effectively correct serious deficiencies and prevent their recurrence. HCFA and the Justice Department reviewed a draft of this report and provided comments, which are reproduced in appendixes III and IV. Both agencies generally agreed with the information provided in this report. In their comments, HCFA and Justice recognized the need for improvements in government oversight of the ICF/MR program to ensure adequate services and safe living conditions for residents of large public institutions. HCFA also provided technical comments, which we have incorporated as appropriate. HCFA is implementing a new survey approach and in its comments agreed with our recommendation that it should assess the effectiveness of this approach. To monitor the implementation of its new approach, federal surveyors will accompany state surveyors on a sample of facility surveys, including a minimum of one large public institution in each state. HCFA plans to analyze the results of these monitoring surveys to determine, among other things, whether the new protocol, as designed, is applicable to large public institutions. These are steps in the right direction. Given the serious problems we have identified in ICFs/MR and in state survey agency performance, we believe HCFA must move quickly to determine whether the new survey process improves the identification of serious deficiencies at large public institutions and make appropriate adjustments if it does not. In its comments, HCFA did not propose specific measures to address our recommendation on the potential conflict of interest that occurs when states are both operators and inspectors of ICFs/MR. HCFA stated that resource constraints have resulted in a significant reduction of on-site federal oversight of state survey agencies and of care in large public ICFs/MR. We believe that HCFA’s plan to increase its presence in the field as part of monitoring implementation of the new survey protocol may reduce the impact of potential conflicts of interest at some institutions. However, HCFA must find a more lasting and comprehensive solution to strengthen the independence of the survey process by program improvements or reallocation of existing resources to enhanced monitoring or direct inspection of institutions. HCFA agreed with our recommendation that it determine whether a wider range of enforcement actions would bring about more effective correction of serious deficiencies and prevent their recurrence. 
HCFA plans to assess whether a wider range of mechanisms would be appropriate for the ICF/MR program on the basis of an evaluation of the impact of alternative enforcement mechanisms for nursing homes due to the Congress in 1997. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date of issue. We will then send copies to the Secretary of the Department of Health and Human Services; the Administrator, Health Care Financing Administration; the U.S. Attorney General; and other interested parties. Copies of this report will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or Bruce D. Layton, Assistant Director, at (202) 512-6837. Other GAO contacts and contributors to this report are listed in appendix V. To address our study objectives, we (1) conducted a review of the literature; (2) interviewed federal agency officials, provider and advocacy group representatives, and national experts on mental retardation and developmental disabilities; (3) analyzed national data from inspection surveys of intermediate care facilities for the mentally retarded (ICF/MR); and (4) collected and reviewed HCFA, state agency, and Justice Department reports on several state institutions. We interviewed officials or representatives from the Health Standards and Quality Bureau of HCFA; HCFA regional offices; the Administration on Developmental Disabilities; the President’s Committee on Mental Retardation in HHS; the Civil Rights Division in the Justice Department; the National Association of State Directors of Developmental Disabilities Services, Inc.; the National Association of Developmental Disabilities Councils; the National Association of Protection and Advocacy Systems; the Accreditation Council on Services for People With Disabilities; and the Association of Public Developmental Disabilities Administrators. Data reviewed at HCFA consisted of automated data and reports submitted by states and regional offices. We analyzed national data from HCFA’s Online Survey, Certification and Reporting System for state and federal ICF/MR surveys conducted between December 1990 and May 1995. Information from surveys conducted before December 1990 was not available at the time of our review. We limited our analysis of HCFA and state data on deficiencies to information about institutions participating in Medicaid as of August 1995. We also reviewed state agency survey reports for 12 large public ICFs/MR, 5 of which were also the subject of Justice investigations between 1991 and 1995. We reviewed Justice’s records since 1990, including findings letters, consent decrees, court filings and actions, and other supporting documentation and analyses related to enforcement of the Civil Rights of Institutionalized Persons Act. We conducted our work between May 1995 and July 1996 in accordance with generally accepted government auditing standards. Following are the eight conditions of participation for intermediate care facilities for the mentally retarded (ICF/MR), as prescribed by the Secretary and contained in federal regulations. 
The standards that must be addressed under this condition include the following: the facility must (1) have a governing body that exercises general control over operations; (2) be in compliance with federal, state, and local laws pertaining to health, safety, and sanitation; (3) develop and maintain a comprehensive record keeping system that safeguards client confidentiality; (4) enter into written agreements with outside resources, as necessary, to provide needed services to residents; and (5) be licensed under applicable state and local laws. To comply with this condition, the facility must (1) undertake certain actions and provide mechanisms to protect the rights of residents; (2) adequately account for and safeguard residents’ funds; (3) communicate with and promote the participation of residents’ parents or legal guardians in treatment plans and decisions; and (4) have and implement policies and procedures that prohibit mistreatment, neglect, or abuse of residents. Standards for facility staffing include requirements that (1) each individual’s active treatment program be coordinated, integrated, and monitored by a qualified mental retardation professional; (2) sufficient qualified professional staff be available to implement and monitor individual treatment programs; (3) the facility not rely upon residents or volunteers to provide direct care services; (4) minimum direct care staffing ratios be adhered to; and (5) adequate initial and continuing training be provided to staff. Regulations specify that each resident receive a continuous active treatment program that includes training, treatment, and health and related services for the resident to function with as much self-determination and independence as possible. Standards under this condition include (1) procedures for admission, transfer, and discharge; (2) requirements that each resident receive appropriate health and developmental assessments and have an individual program plan developed by an interdisciplinary team; (3) requirements for program plan implementation; (4) adequate documentation of resident performance in meeting program plan objectives; and (5) proper monitoring and revision of individual program plans by qualified professional staff. Standards under this condition specify that the facility (1) develop and implement written policies and procedures on the interaction between staff and residents and (2) develop and implement policies and procedures for managing inappropriate resident behavior, including those on the use of restrictive environments, physical restraints, and drugs to control behavior. To meet the requirements of this condition, the facility must (1) provide preventive and general medical care and ensure adequate physician availability; (2) ensure physician participation in developing and updating each individual’s program plan; (3) provide adequate licensed nursing staff to meet the needs of residents; (4) provide or make arrangement for comprehensive dental care services; (5) ensure that a pharmacist regularly reviews each resident’s drug regimen; (6) ensure proper administration, record keeping, storage, and labeling of drugs; and (7) ensure that laboratory services meet federal requirements. 
Requirements under this condition include those governing (1) residents’ living environment, (2) size and furnishing of resident bedrooms, (3) storage space for resident belongings, (4) bathrooms, (5) heating and ventilation systems, (6) floors, (7) space and equipment, (8) emergency plans and procedures, (9) evacuation drills, (10) fire protection, (11) paint, and (12) infection control. Standards under this condition are designed to ensure that (1) each resident receives a nourishing, well-balanced, and varied diet, modified as necessary; (2) dietary services are overseen by appropriately qualified staff; and (3) dining areas are appropriately staffed and equipped to meet the developmental and assistance needs of residents. In addition to those named above, the following team members made important contributions to this report: James Musselwhite and Anita Roth, evaluators; Paula Bonin, computer specialist; Karen Sloan, communications analyst; George Bogart, attorney-advisor; and Leigh Thurmond and Jamerson Pender, interns.
Pursuant to a congressional request, GAO reviewed the role of the Health Care Financing Administration (HCFA), state agencies, and the Department of Justice (DOJ) in overseeing quality of care in intermediate care facilities for the mentally retarded (ICF/MR), focusing on: (1) deficient care practices occurring in large ICF/MR; (2) whether state agencies identify all serious deficiencies in these institutions; and (3) weaknesses in HCFA and state oversight of ICF/MR care. GAO found that: (1) despite federal standards, HCFA and state agency oversight, and continuing Justice Department investigations, serious quality-of-care deficiencies continue to occur in some large public ICFs/MR; (2) insufficient staffing, lack of active treatment needed to enhance independence and prevent loss of functional ability, and deficient medical and psychiatric care are among those deficiencies that have been frequently cited; (3) in a few instances, these practices have led to serious harm to residents, including injury, illness, physical degeneration, and death; (4) states, which are the key players in ensuring that these institutions meet federal standards, do not always identify all serious deficiencies nor use sufficient enforcement actions to prevent the recurrence of deficient care; (5) direct federal surveys conducted by HCFA and Justice Department investigations have identified more numerous and more serious deficiencies in public institutions than have state surveys; (6) furthermore, even when serious deficiencies have been identified, state agencies' enforcement actions have not always been sufficient to ensure that these problems did not recur; (7) some institutions have been cited repeatedly for the same serious violations; (8) although HCFA has recently taken steps to improve the process for identifying serious deficiencies in these institutions and to more efficiently use limited federal and state resources, several oversight weaknesses remain; (9) moreover, state surveys may lack independence because states are responsible for surveying their own institutions; and (10) the effects of this potential conflict of interest raise concern given the decline in direct federal oversight of both the care in these facilities and the performance of state survey agencies.
Position classification systems are formal methods for determining the relative worth of positions in an organization. The Office of Personnel Management (OPM) has responsibility and authority for federal position classification, except for certain positions in agencies exempted by law. OPM develops and issues classification standards and policies for the federal personnel system. Federal agencies then use these standards and policies to assign grades to positions. The General Schedule (GS) is a 15-grade pay system that covers 442 white-collar occupations and approximately 1.5 million full-time permanent employees in the federal white-collar workforce. Agencies classify most of these positions using either narrative or FES point-factor classification standards. The FES primary standard serves as the framework for individual classification standards written for each occupation. When we began our study, FES standards were in effect for 77 occupations covering approximately 441,000 full-time permanent nonsupervisory employees. Under FES, positions are assigned grades on the basis of the duties, responsibilities, and the qualifications required in terms of the following nine factors: knowledge required by the position covers the nature and extent of information the incumbent must understand and the skills needed to apply that knowledge; supervisory controls entail the control exercised by the incumbent’s supervisor (not the incumbent’s span of control over subordinates); guidelines provide reference data or impose certain constraints on the incumbent’s use of knowledge, include desk manuals, established procedures and policies, traditional practices, and materials such as dictionaries, and vary by specificity, applicability, and availability; complexity consists of the nature, variety, and intricacy of tasks and the difficulty and originality involved in performing the work; scope and effect encompass the relationship between the breadth and depth of the work and the effect of work products and services within and outside the organization; personal contacts refer to contacts with persons not in the incumbent’s supervisory chain; purpose of contacts ranges from factual exchanges of information to situations involving significant or controversial issues and differing viewpoints, goals, or objectives; physical demands include physical characteristics such as agility, dexterity, and physical exertion—such as climbing, lifting, pushing, etc.; and work environment pertains to the risks and discomforts in the employee’s physical surroundings or work assigned. As shown in table 1, each factor is broken down into graduated levels. Factors are composed of from three to nine levels with most having four to six levels. One factor—knowledge required by the position—has the largest range of points, from 50 for level 1 to 1,850 for level 9. To determine a position’s GS grade, the agency typically compares either the position description or information gathered through a “desk audit” with the nine FES factors described in the classification standard. After all nine factors are evaluated, the points for all factors are totaled, and the total for each position is converted to a GS grade by using a conversion table (see table 2). A contractor, with our supervision, developed a job content questionnaire on the basis of the FES primary standard. The contractor distributed it to a stratified random sample of 2,060 pairs of incumbents and their supervisors and received responses from 1,639 incumbent/supervisor pairs, which represents an overall response rate of about 80 percent.
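To make the scoring mechanics concrete, the following minimal sketch (in Python) walks through the totaling and conversion steps just described. The grade break points and the sample factor-level points are illustrative stand-ins; table 2 and the factor-level point values are not reproduced in this excerpt.

```python
# Illustrative sketch of FES point-factor grading. The break points below
# approximate the FES points-to-grade conversion table (table 2) for
# illustration only; they are not an official reproduction.
FES_CONVERSION = [  # (minimum total points, GS grade), highest grade first
    (4055, 15), (3605, 14), (3155, 13), (2755, 12), (2355, 11),
    (2105, 10), (1855, 9), (1605, 8), (1355, 7), (1105, 6),
    (855, 5), (655, 4), (455, 3), (255, 2), (190, 1),
]

def total_points_to_grade(points: float) -> int:
    """Convert a position's total factor points to a GS grade."""
    for minimum, grade in FES_CONVERSION:
        if points >= minimum:
            return grade
    raise ValueError("total points fall below the GS-1 range")

# A position is scored on each of the nine factors and the points summed.
# These factor-level points are hypothetical.
factor_points = {
    "knowledge required": 950, "supervisory controls": 275, "guidelines": 275,
    "complexity": 150, "scope and effect": 150, "personal contacts": 25,
    "purpose of contacts": 50, "physical demands": 5, "work environment": 5,
}
grade = total_points_to_grade(sum(factor_points.values()))  # 1,885 points -> GS-9 here
```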
Because individual federal positions are classified through labor-intensive desk audits, it was not practical for us to study the majority of FES occupations using traditional classification methods. The contractor did, however, do desk audits of 78 judgmentally selected positions and compared the audit results with the related questionnaire responses. These comparisons indicated a fairly high correlation across occupations. The validity coefficient between the GS grades resulting from the desk audits and those from the questionnaires was .80 when the incumbent and supervisor questionnaire responses were averaged. We considered this correlation to be sufficiently high to validate the use of the job content questionnaire results for comparing groups of occupations. However, we cannot attest to the questionnaire’s validity when used across GS grades within occupations, for specific occupations, or for individual positions. Further, our study was not designed to determine whether, and if so why, the job content reflected by the questionnaires differed from that contained in the official job descriptions, which are the bases for the actual GS grades. We used an OPM database to identify a representative segment of incumbents and their supervisors in occupations with high, medium, or low representations of women or minorities. We defined occupations with high, medium, and low female representation as those in which women represented 70 percent or more, 31 to 69 percent, and 30 percent or less of incumbents, respectively. We considered occupations in which minorities represented more than 41 percent, 23 to 41 percent, or less than 23 percent of incumbents as those with high, medium, and low minority representation, respectively. Examples of occupations with high female representation included secretary, dental hygienist, medical clerk, dental assistant, and occupational therapist. Examples of occupations with high minority representation included border patrol agent and computer and equal employment opportunity specialists. We selected our sample of incumbents and their supervisors from a total of 58 occupations, which collectively represented about 90 percent of the full-time permanent nonsupervisory employees covered by FES when we designed our study. Because preliminary analyses indicated that more variation in undergrading and overgrading existed among occupations with similar gender and minority representation than between groups of occupations with different gender and minority representations, we did our analyses on the 37 occupations for which we had received completed questionnaires from at least 10 incumbent/supervisor pairs, for a total of 1,358 pairs or positions. These 37 occupations represented almost one-quarter of the federal white-collar workforce, or about 79 percent of the employees covered by FES, and the results of our study are generalizable only to this population. On the basis of the average total factor level points derived from the questionnaires completed by the incumbent and the supervisor, we determined the GS grade for each position using the FES conversion table (see table 2). We compared the questionnaire grade with the incumbent’s actual GS grade and considered the position to be aptly, or appropriately, graded if the questionnaire grade and the actual grade were the same. Otherwise, we considered positions to be overgraded if the actual grade was higher than the questionnaire grade and undergraded if the actual grade was lower than the questionnaire grade.
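A minimal sketch of that comparison, assuming the total_points_to_grade helper from the sketch above and hypothetical point totals:

```python
def questionnaire_grade(incumbent_points: float, supervisor_points: float) -> int:
    """Average the incumbent's and supervisor's total factor points,
    then convert the average to a GS grade."""
    return total_points_to_grade((incumbent_points + supervisor_points) / 2)

def grading_status(actual_grade: int, q_grade: int) -> str:
    """Classify a position by comparing its actual GS grade with the
    grade derived from the questionnaire responses."""
    if actual_grade == q_grade:
        return "aptly graded"
    return "overgraded" if actual_grade > q_grade else "undergraded"

# Example: incumbent and supervisor totals of 1,905 and 1,775 points
# average to 1,840, or GS-8 under the illustrative table above, so an
# actual GS-9 position would be counted as overgraded.
status = grading_status(9, questionnaire_grade(1905, 1775))
```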
Our study was not designed to permit us to approximate the number of positions that appeared to be overgraded, undergraded, or aptly graded for the portion of the federal workforce to which the results of our study are generalizable; identify the causes of any overgrading or undergrading resulting from either (1) our use of the primary rather than the occupation-specific classification standards, (2) the agencies’ application of classification standards to individual positions, or (3) management decisions regarding the work incumbents were actually assigned versus their job descriptions; determine whether any difference on the basis of gender or minority status was inherent in the design of FES, as a product either of the factors that constitute FES or the allocation of weight or the point range assigned to each factor; or calculate what pay adjustments, if any, should be made. Appendix I contains a more detailed discussion of the job content questionnaire development and validation, sample selection, and response rate calculation. To determine the effects of female or minority representation on relative overgrading and undergrading, we used odds and odds ratios. We calculated the odds of occupations being undergraded rather than aptly graded by dividing the number of positions undergraded by the number aptly graded for groups of occupations. To determine how much more likely one group of occupations was to be undergraded than another group, we divided the odds of being undergraded for one group of occupations by the odds for the other group to form an odds ratio. We used the same procedures for overgrading. We used loglinear analysis to determine how the odds of occupations being overgraded or undergraded versus aptly graded varied (1) across the range of GS grades and (2) when the female or minority representation of the occupation was high, medium, or low. The strength of this particular statistical approach is that multiple variables can be analyzed simultaneously. Appendix II provides more detailed information on the calculation of odds and odds ratios and loglinear models tested and the results obtained. To gain historical perspective, we reviewed previous studies of federal classification issues. We also conferred with federal classification experts to obtain additional insights about possible explanations for specific findings. This study should not be referred to as a “pay equity” study because we examined only the relationship between job content and GS grades assigned through the use of FES and whether that relationship varied with the proportion of women or minorities in occupations. We obtained comments from OPM that are discussed on pages 11 through 12 and presented in appendix III. We did our study from January 1990 to September 1995 in accordance with generally accepted government auditing standards. To evaluate whether gender or minority representation had an effect on the variation between the actual grades and the questionnaire grades, we statistically eliminated the effect of the incumbents’ GS grades. Analysis of the remaining variation showed that occupations with high female representation were 1.77 times more likely to be undergraded rather than aptly graded compared with occupations having a low or medium female representation. Occupations with high minority representation were 2.18 times more likely to be overgraded rather than aptly graded compared with occupations having a low or medium minority representation. 
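As a worked illustration of this odds arithmetic (the counts below are hypothetical, chosen only to produce an odds ratio near the ones reported):

```python
def odds(n_outcome: int, n_aptly: int) -> float:
    """Odds of an outcome (e.g., undergraded) versus aptly graded."""
    return n_outcome / n_aptly

# Hypothetical counts of undergraded vs. aptly graded positions.
odds_high_female = odds(60, 100)   # 0.60 for high-female-representation occupations
odds_other = odds(34, 100)         # 0.34 for the low/medium comparison group

# Odds ratio: how much more likely the high-representation group is to be
# undergraded rather than aptly graded, compared with the other group.
odds_ratio = odds_high_female / odds_other   # about 1.76
```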
Classification experts with whom we consulted about our results and the available literature offered a few occupation-specific hypotheses about possible causes. For example, key occupations with high minority representation that appeared to be overgraded included (1) border patrol agents, (2) equal employment opportunity and compliance specialists, and (3) computer specialists. Previous studies of federal classification issues maintained that FES was ineffective for specialized occupations, such as those in law enforcement, because physical demands and work environment are not highly valued FES factors but are considered significant in these occupations. Although empirical data are lacking, the classification experts we consulted suggested that when equal employment opportunity occupations were established in the 1970s, they involved a heavy workload of cases and, even though not recognized by FES, the GS grades of these occupations may have been increased on that basis. Furthermore, competition with private sector wages may have resulted in the overgrading of positions in computer-related occupations. Explanations are somewhat less evident regarding occupations with high female representation, which appear more likely to be undergraded. Critics of the federal classification system, including reports from the National Academy of Public Administration and the National Performance Review, have argued that the classification system is overly complex, with too many occupations and GS grades, and that a less rigid system would be more effective. They have recommended fewer occupations and more flexible broad-banded grade structures. OPM officials told us recently that, within the existing statutory framework, OPM plans to revise the classification standards and increase its oversight of various processes, including classification. A current proposal for rewriting classification standards would reduce the inventory of 442 white-collar classification standards to about 74. Rather than individual occupations, the new standards would focus on the 22 “job families” of related occupations with separate standards, as applicable, for professional, administrative, technical, and clerical positions. OPM has also established a new oversight office, which, among other things, is planning various governmentwide policy studies. OPM has tentatively allocated about 145 staff years to this effort; most of these resources are located in field offices rather than at headquarters. One of the highest priorities will be a governmentwide classification study, with particular emphasis on determining the accuracy of “border grades”—those grades most likely to be placed at the lower and upper limits of any newly created grade bands. A team is examining options for doing this study, and work on the study is scheduled to begin early in fiscal year 1996. Although FES is considered a more orderly or rigorous method than other federal classification systems, our study identified differences in the grading of positions in occupations with high representations of women or minorities. The National Performance Review and other studies suggest that the current classification systems should be abandoned in favor of more flexible, broad-banded systems.
The results of our study indicate that it is important that policymakers closely monitor any new systems to ensure that (1) unintended disparities are identified so that they can be corrected and (2) the national policy underlying the current classification system—that jobs be classified so that pay is equal for substantially equal work—is being satisfactorily achieved. Since OPM is in the process of substantially revising the classification and oversight systems, we are making no recommendations in this report. OPM provided written comments on a draft of this report. (See app. III.) OPM took issue with our methodology, saying that it was insufficient to support our findings. OPM also discussed its plans to further explore job classification accuracy issues. OPM took exception to our methodology in two respects. First, it believes we inappropriately used the primary classification standard rather than occupation-specific standards as the basis for our job content questionnaire. OPM said that the primary standard was designed to be used as an overall outline from which more specific standards would be developed, not as a basis for evaluating individual positions. Although we used the primary standard rather than the occupation-specific standards in the development of our methodology, we examined the specific standard for several occupations in our sample to see if we could identify any ways in which the use of the occupation-specific standard might have led to a different result. We did not identify any such effect. We also asked OPM’s classification experts to identify occupations in our sample for which the specific standards were, in their view, sufficiently different from the primary standard that our results would have been affected in identifiable ways. They did not identify any such occupations. Thus, we continue to believe that our use of the primary standard was appropriate and that our methodology produced useful results. OPM’s second exception to our methodology concerned our use of a job content questionnaire rather than traditional desk audits to assign grades to positions. OPM said that our use of a questionnaire resulted in employees and supervisors, unfamiliar with FES ground rules, being asked to select generic phrases that were not in context and that this in turn resulted in the grades we assigned being less credible than those derived by federal agencies. We acknowledge in our text that actual GS grades are assigned on the basis of occupation-specific standards and that the desk audit is a typical way to assign a grade to a position. As noted in the text, desk audits are labor-intensive, and it was not practical for us to study the majority of FES occupations using traditional classification methods. Because of this, we took care to validate our results. First, we had a contractor do desk audits on positions in a number of occupations in our sample. Next, we compared the results of those desk audits with the questionnaire results for those positions. This comparison showed a fairly high correlation between the GS grades resulting from desk audits and those from the questionnaire. Thus, we believe that our methodology is appropriate to identify patterns of overgrading and undergrading among groups of occupations with different representations of women or minorities. OPM also questioned our study results by comparing them with other studies.
More specifically, OPM said that other agencies’ studies and OPM’s classification appeals data indicated lower levels of misclassification than our study. We are unaware of any recent studies or appeals data in which a direct comparison with our study could be meaningful. Although OPM’s most recent report on the overall federal white-collar position classification accuracy indicated a lower level of misclassification than our study, it was published in 1983. We acknowledge that classification appeals also indicate a lower level of misclassification than our study. However, classification experts with whom we consulted said that appeals data are unlikely to represent those federal employees whose positions may be overgraded. As indicated by the Merit Systems Protection Board (MSPB), almost no one files a classification appeal. Finally, OPM said that it shares with us the need to ensure that the federal government’s classification systems and their applications are fair and unbiased. OPM said that to this end, its newly designed oversight program will have a major focus on ensuring that current and new classification systems advance the merit principles of equal pay and the efficient and effective use of the federal workforce. OPM said that it expects to decide on the classification review design by the end of fiscal year 1995 and begin work on the review in early fiscal year 1996. We are sending copies of this report to interested Members of Congress and congressional committees that have responsibilities for public sector employment issues, the Director of the Office of Personnel Management, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix IV. If you have any questions about the report, please call me at (202) 512-7824. Using the competitive bid process, we contracted with a national management consultant—Barrett & Associates, Inc., of Akron, Ohio—to (1) develop a job content questionnaire that enabled the contractor to estimate GS grades for positions in our nationwide sample using input from the incumbents and their supervisors, (2) distribute the questionnaire to a sample of incumbent/supervisor pairs, attaining at least an 80 percent response rate, (3) develop a semistructured interview guide for completing desk audits, (4) validate the questionnaire through the use of desk audits, and (5) analyze the questionnaire responses. The analyses were done to determine whether a link existed between the GS grades assigned to positions through the use of the Factor Evaluation System (FES) and the number of women or minorities in occupations that we included in our study. We selected the sample of incumbent/supervisor pairs and worked closely with the contractor, providing supervision throughout the process. Because the preliminary analyses showed that more variation existed within groups of occupations than between those groups, we refined the analyses of the questionnaire responses as described in appendix II. In addition to working with the contractor, we consulted periodically with an ad hoc panel of experts that provided technical guidance during the early design phase of our study. Our panel consisted of Ms. Ruth Rogers, former Chief, Standards Division, Department of Personnel, Government of the District of Columbia, who provided staff assistance to the District’s pay equity study; Dr.
Donald Schwab, Professor of Business Research and Industrial Relations, University of Wisconsin-Madison, who has written extensively on the issue of comparable worth; and Dr. Ronnie Steinberg, Professor of Sociology and Women’s Studies, Temple University, who has considerable experience with pay equity issues and job content questionnaires. We also consulted with Dr. David Rindskopf, Professor, Graduate School and University Center, City University of New York, for assistance with our sample selection methodology. After we completed our analyses, we shared our results with and obtained informal comments from five federal officials with classification experience. In addition to a representative from OPM and GAO’s personnel office, we selected three of these experts from a list of recommended candidates provided at our request by the Classification and Compensation Society, a professional organization of federal classifiers and other personnel specialists. The classification experts included a mix of male, female, minority, and nonminority individuals who worked in OPM’s then Personnel Systems and Oversight Group and in the personnel offices of the Federal Aviation Administration, the Department of Defense, the Library of Congress, and the General Accounting Office. We developed a job content questionnaire that enabled us to estimate GS grades for positions covered by FES, a point-factor position classification system the federal government used to classify 77 of the 455 white-collar occupations when we designed our study. On the basis of the factor descriptions in the FES primary standard, we constructed questions that allowed us to determine the appropriate level for each factor. Table I.1 shows the points associated with each factor level. Figures I.1 and I.2 show an excerpt from the FES primary standard and the resulting question that we included in our job content questionnaire, respectively. The excerpted portion of the primary standard (figure I.1) defines Factor 1, knowledge required by the position, as measuring the nature and extent of information or facts which the workers must understand to do acceptable work (e.g., steps, procedures, practices, rules, policies, theories, principles, and concepts) and the nature and extent of the skills needed to apply those knowledges; to be used as a basis for selecting a level under this factor, a knowledge must be required and applied. The factor’s levels range from knowledge of simple, routine, or repetitive tasks or operations, which typically includes following step-by-step instructions and requires little or no previous training or experience, and knowledge of basic or commonly used rules, procedures, or operations, which typically requires some previous training or experience, up through mastery of a professional or administrative field sufficient to apply experimental theories and new developments to problems not susceptible to treatment by accepted methods, and mastery of a professional field to generate and develop new hypotheses and theories. The resulting questionnaire item (figure I.2) asked, “To perform your job in a satisfactory manner, what level of knowledge and skill is needed? (Check one.)” Its response choices paralleled the factor levels, from “tasks or operations involving specific step-by-step instructions requiring little or no previous training or experience” and “a few basic or commonly used rules, procedures, or operations plus some previous or on-the-job training or experience” up to “mastery of a professional, technical or administrative field to the point where one can apply new hypotheses, theories, or applications to applied problems” and a ninth, highest choice corresponding to the top factor level.
The GS grade is determined by totaling the points assigned to each of the nine factors and using the grade conversion table shown in table I.2. To determine whether incumbents could readily understand the questionnaire and complete it within a reasonable time period, we completed three sets of pretests. In total, we selected about 30 incumbents and supervisors who were located in the Cleveland metropolitan area and who represented a range of GS grades and occupations covered by FES. After observing them as they completed the questionnaire, we interviewed each incumbent or supervisor to identify how the questionnaire could be improved; we rewrote or edited most of the questions on the basis of the information they provided. During the initial pretest, we also revised our semistructured interview guide for use in completing subsequent desk audits. On the basis of questionnaire responses received from incumbents and their supervisors, we estimated the appropriate GS grade for each position in our sample by (1) determining the appropriate level (and corresponding points) for each of the nine factors, (2) averaging the total points computed separately for the incumbent and the supervisor, and (3) using the FES points-to-grade conversion table to assign a GS grade to the position. We decided to average the input from the incumbent and the supervisor to balance the views of those who would place more reliance on the input of incumbents versus that of supervisors. Table I.3 shows an example of how we estimated the GS grade for one position in our sample. We selected our nationwide sample from the full-time permanent nonsupervisory incumbents in the 77 occupations for which an FES standard had been in existence for at least 1 year when we completed our study design in May 1992. Because of limited resources, we excluded incumbents who (1) were stationed outside of the continental United States or (2) worked for agencies with less than 500 employees. Each of the 77 occupations included from 35 to 91,769 incumbents for a total of 441,189 full-time permanent nonsupervisory incumbents. Women constituted about 61 percent of the total workforce covered by FES and minorities, approximately 32 percent. We used OPM’s Central Personnel Data File (CPDF), updated as of June 1992, to select our sample. These were the most recent data available when we designed our study. We did not independently verify the accuracy of this database. We defined each of the 77 occupations covered by FES as having a high, medium, or low representation of women or minorities. On the basis of the current literature, we defined occupations with high, medium, and low female representation as those in which women represented 70 percent or more, 31 to 69 percent, and 30 percent or less of the incumbents, respectively. The literature contained no consistent basis on which to define minority representation; therefore, on the basis of our past work, we initially adopted a standard of more than 48 percent (150 percent of the 32 percent minority workforce representation) to define high minority representation; less than 16 percent (50 percent of 32 percent), low minority representation; and 16 to 48 percent, medium minority representation. 
However, in order to include in our sample at least two occupations with each possible mix of gender and minority representation (e.g., low female, high minority representation), we found it necessary to adjust our definitions for occupations with high, medium, and low minority representation to those in which minorities represented more than 41 percent, 23 to 41 percent, and less than 23 percent of incumbents, respectively. Table I.4 shows the distribution of the 77 occupations in a nine-cell matrix configured according to female and minority representation. From the 77 occupations covered by FES, we selected 58 on the basis of four characteristics—number of incumbents, PATCO (professional, administrative, technical, clerical, and other) category, job family, and GS grade distribution. First, we chose occupations in each matrix cell with the largest number of incumbents. Second, we examined the PATCO category of the occupations selected within each cell and chose additional occupations to include all categories. Third, we reviewed the job families represented by the occupations already selected and chose additional occupations, increasing the number of families included in our study to 17 out of a total of 22. Finally, we examined the GS grade distributions of the occupations selected and chose occupations that included grades not previously selected within each row and column of the matrix. Table I.5 shows the female and minority representation, PATCO category, and number of full-time permanent nonsupervisory incumbents for each of the 77 occupations covered by FES when we selected our sample as well as the 58 occupations selected for inclusion in our sample. (Table I.5, “Female and Minority Representation, PATCO Category, and Number of Incumbents for the 77 Occupations Covered by FES,” is not reproduced here.) We selected incumbents for our sample from the 58 occupational series not shown in parentheses in that table. We selected no more than 11 occupations for each matrix cell because we planned to validate the job content questionnaire by completing two sets of 100 desk audits that would each (1) include at least one position from each occupation in our sample and (2) be evenly distributed among the 9 matrix cells. Table I.6 shows the distribution of the 58 occupations selected for inclusion in our sample by matrix cell. After selecting the 58 occupations, we completed a pilot test by distributing the job content questionnaire to 389 pairs of incumbents and their supervisors who were located in either the Cleveland, OH, or Washington, D.C.
areas and who represented a range of GS grades and occupations included in our sample. Because the CPDF does not identify a specific address or supervisor for incumbents, we forwarded the questionnaires to the appropriate agency personnel offices for distribution. Pairs of trained job analysts, each pair including at least one woman and one minority group member, completed the first set of desk audits for 100 of the 257 positions for which the incumbent and the supervisor returned a questionnaire. Of the 100 positions, we included in our analyses the 84 positions for which the incumbent and the supervisor provided complete responses to all questionnaire items. To ensure the independence of the validation process, the contractor instructed the job analysts not to review the incumbent/supervisor responses to the questionnaire for any position before they completed a desk audit. On the basis of the pilot test, we determined that (1) incumbents at higher GS grades tended to undervalue their positions when compared with the desk audit results, while incumbents at lower grades tended to overvalue their positions and (2) the grades of incumbents included in our study were not evenly distributed across the matrix rows and columns. As a result, the validity coefficient between GS grades estimated on the basis of the desk audits versus incumbent/supervisor questionnaire responses was .74. Because the combination of the two effects threatened to distort the results of our planned statistical comparisons, we stratified our sample by classifying the incumbents of the 58 occupations into one of seven groups, or strata, on the basis of GS grade: grades 1 to 4, grade 5, grade 6, grades 7 to 8, grades 9 to 10, grade 11, and grades 12 to 15. We then determined the number of incumbent/supervisor pairs needed within each matrix cell using a complex balancing design to ensure that the row and column totals would be about equal. This stratified sampling strategy enabled us to balance the grade distribution of incumbents selected within each row and column of the matrix and thus eliminate any grade-level effect in subsequent analyses. We randomly selected our sample of full-time permanent nonsupervisory incumbents from the 58 occupations that met the criteria for each matrix cell and GS grade stratum and distributed the job content questionnaire to 2,233 incumbents and their supervisors. Because of the random selection process, we did not select incumbents from 3 of the 58 occupational series—import specialist, education program, and correspondence clerk. Table I.7 shows the distribution of the 2,233 incumbents in a matrix configured according to female and minority representation and strata. In addition to the sample of 2,233 incumbent and supervisor pairs, we also selected a supplemental sample of 303 pairs to enable us to complete a second set of desk audits expeditiously. We selected the supplemental sample from those incumbents in the 58 occupations who were located in one of three geographical areas—Washington, D.C., Dayton/Cincinnati, OH, or Los Angeles, CA—to represent incumbents working in the eastern, central, and western United States. Because we did not randomly select the pairs in the supplemental sample from all incumbents working in the 58 occupations covered by FES, we did not include their questionnaire responses in our response rate calculation or analyses. We forwarded the job content questionnaires to the appropriate agency personnel offices for distribution to each of the 2,233 incumbent/supervisor pairs.
For 179 pairs, agency officials notified us that either the incumbent or the supervisor no longer held the position indicated by the CPDF or was located outside of the United States. As planned, we eliminated these pairs from our study. Of the remaining 2,054 pairs, we received questionnaires from 1,633 pairs of respondents, for a response rate of 80 percent. Table I.8 shows the disposition of each of the 2,233 incumbent/supervisor pairs in our sample. For 201 of the 1,633 pairs of respondents, either the incumbent, the supervisor, or both left some questionnaire items unanswered. Therefore, we included the remaining 1,432 pairs in our preliminary analyses. On the basis of the incumbent/supervisor questionnaire responses, we estimated the GS grade for each of the 1,432 positions and compared this grade with the position's actual grade. We considered a position to be appropriately graded when the actual grade equaled the questionnaire grade, overgraded when the actual grade was higher than the questionnaire grade, and undergraded when the actual grade was lower than the questionnaire grade. When we designed this study, we planned to compare the overgrading and undergrading among groups of occupations with different gender and minority representations or among matrix rows and columns. However, preliminary analyses showed that more variation in overgrading and undergrading existed within the same row or column than between different rows or columns. For this reason, we analyzed the 37 occupations for which we had received completed questionnaires from at least 10 incumbent/supervisor pairs. We did not include the remaining 18 occupations in our analyses because we could not obtain reliable estimates of overgrading and undergrading for an occupation with fewer than 10 pairs of respondents. The 37 occupations included 1,358 incumbent/supervisor pairs and represented about 79 percent of the incumbents covered by FES. Table I.9 shows the gender and minority representation of the 37 occupations included in our study. We validated our job content questionnaire by using a second set of desk audits. To simplify selecting positions that represented a broad range of grades, we combined our seven strata into three groups: high (GS-11 to 15), medium (GS-6 to 10), and low (GS-1 to 5). We then scheduled interviews with incumbent/supervisor pairs, which allowed us to complete desk audits that were distributed evenly across the three GS grade ranges, the three selected geographical locations, and the nine matrix cells. Again, to ensure the independence of the validation process, the contractor instructed the job analysts not to review the incumbent/supervisor responses to the questionnaire for any position before they completed a desk audit. Although we completed 100 desk audits, we validated the questionnaire on the basis of the 78 positions that represented the 37 occupations included in our analyses and for which the incumbent and supervisor provided complete responses to the questionnaire. For the 78 positions, we completed 24 desk audits for high-graded positions, 35 for medium-graded positions, and 19 for low-graded positions; by location, we completed 37 desk audits in Washington, D.C.; 21 in Los Angeles, CA; and 20 in Dayton/Cincinnati, OH.
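Each desk audit, like each questionnaire, yielded an estimated GS grade for the position, and the validity coefficients we report are, in effect, correlations between the two sets of estimates. A minimal sketch of such a computation, assuming a Pearson correlation and using hypothetical grades, follows.

```python
# A minimal sketch of a validity coefficient computation, assuming it
# is the Pearson correlation between GS grades estimated from desk
# audits and GS grades estimated from questionnaire responses. The
# grade values below are hypothetical.
import numpy as np

desk_audit_grades    = np.array([5, 7, 9, 11, 12, 6, 8, 10])
questionnaire_grades = np.array([5, 8, 9, 10, 13, 6, 7, 11])

validity = np.corrcoef(desk_audit_grades, questionnaire_grades)[0, 1]
print(round(validity, 2))  # about 0.95 for these hypothetical values
```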
Table I.10 shows the distribution of the 78 positions for which a desk audit was completed by occupation and matrix cell.

Table I.10: Distribution of Desk Audits Completed According to Female and Minority Representation for the 37 Occupations Included in the Study

The resulting validity coefficient between GS grades estimated on the basis of the 78 desk audits versus incumbent/supervisor questionnaire responses was .80, thus meeting the goal we set when we designed the study. This is the only classification study we are aware of in which desk audits were used to validate the questionnaire results. Table I.11 shows the validity coefficients between GS grades estimated on the basis of desk audits versus the questionnaire responses for incumbent/supervisor pairs, supervisors, and incumbents. For the 37 occupations included in our analyses, we used loglinear statistical techniques to analyze the odds of individual occupations being overgraded or undergraded versus appropriately graded (1) in relation to GS grades and (2) when the female or minority representation of an occupation was high, medium, or low. The strength of this particular statistical approach was that multiple variables could be analyzed simultaneously, thereby enabling us to examine complex relationships in the data. Appendix II provides more detailed information on our loglinear methodology, the loglinear models tested, and the results obtained.

This appendix provides additional technical detail on our analytical approach. It contains a general description of loglinear methodology, describes the variables analyzed, and presents the loglinear models tested and the results obtained in each analysis. We used loglinear analyses to examine the relationship between the likelihood of being overgraded or undergraded and (1) the average GS grade of the sampled incumbents by occupation and (2) female and minority representation. We first looked at the relationship between the likelihood of being overgraded or undergraded and female or minority representation without taking the effect of the average GS grade into consideration. For each analysis, we considered the preferred model to be the simplest one that fit the data and could not be significantly improved upon by more complex models. The preferred model included those components that had statistically significant effects after we controlled for the influences of other factors. Hence, the estimates we obtained were net effects, determined after the association of each variable with all other variables had been taken into account or statistically eliminated. On the basis of the preferred model, we estimated the direction and magnitude of the relationships using odds and odds ratios. The odds indicate the likelihood that an outcome will occur given a particular factor or combination of factors, and the odds ratios indicate the size of the effect of the various factors on that likelihood. The more an odds ratio diverges from 1.0, the stronger the relationship. All of our analyses were based on the 37 occupations shown in table II.1. The table also indicates by occupation whether the female or minority representation was high, medium, or low; the average GS grade of the incumbents in the sample; whether the incumbents were overgraded, aptly graded, or undergraded; and the odds of being either overgraded or undergraded versus aptly graded. For each occupation or group of occupations, we derived the odds of being overgraded versus aptly graded by dividing the number of incumbents that were overgraded by the number aptly graded.
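A minimal sketch of this comparison and of the odds computation, using hypothetical (actual grade, questionnaire grade) pairs, follows; the function and variable names are ours.

```python
# A minimal sketch of the grading comparison and the odds computation
# described above. The (actual, questionnaire) grade pairs are hypothetical.

def grading_status(actual: int, questionnaire: int) -> str:
    """Overgraded if the actual grade exceeds the questionnaire grade,
    undergraded if it is lower, and aptly graded if the two are equal."""
    if actual > questionnaire:
        return "overgraded"
    if actual < questionnaire:
        return "undergraded"
    return "aptly graded"

pairs = [(9, 9), (11, 10), (7, 8), (12, 11), (5, 5), (10, 9)]
statuses = [grading_status(a, q) for a, q in pairs]

n_over = statuses.count("overgraded")   # 3
n_apt = statuses.count("aptly graded")  # 2

# Odds of being overgraded versus aptly graded: overgraded incumbents
# outnumber aptly graded incumbents 3 to 2.
print(n_over / n_apt)  # 1.5
```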
The odds of being undergraded versus aptly graded were similarly calculated. All of the results reported in this section are provided for illustrative purposes only. We computed ratios, sometimes referred to as odds ratios, by dividing the odds of overgrading or undergrading in one group of occupations by the corresponding odds in another group. For example, occupations with high female representation were about 1.62 times more likely to be undergraded rather than aptly graded when compared with occupations having medium female representation (2.09 / 1.29 = 1.62). The odds ratios shown in the table indicate sizable differences in overgrading and undergrading between occupations with high female representation and those with medium representation but only slight differences between occupations with medium representation and those with low representation. Table II.2 indicates that all groups of occupations tended to be overgraded rather than aptly graded or undergraded. The focus of our study was determining whether occupations with high female representation differed significantly from those with medium or low female representation. To determine whether the differences in the sample data shown in table II.2 reflected "real" differences in the population and not simply chance or sampling fluctuations, we fit the models shown in the top section of table II.3 to the data shown in table II.1. On the basis of the likelihood ratio chi-square values and degrees of freedom associated with the different models we fit to the observed data, the preferred model was the third one in both the upper and lower sections of the table. The third model in the upper section of the table indicated a pronounced tendency for incumbents of jobs with high female representation to be undergraded. The third model in the lower section of the table indicated a pronounced tendency for the incumbents of occupations with high minority representation to be overgraded. We also considered the relationship between incumbents being overgraded, aptly graded, or undergraded and minority representation (see table II.5). It should be noted that the effect of the average GS grade was not statistically eliminated, or controlled for, in this comparison. For high, medium, and low minority representation, the numbers of overgraded and of undergraded incumbents each outnumbered those appropriately graded. The odds shown in the table indicate the extent to which overgraded or undergraded incumbents outnumbered aptly graded incumbents. Occupations with high minority representation were 1.43 times more likely to be overgraded rather than aptly graded when compared with occupations having medium minority representation (4.48 / 3.13 = 1.43). The odds ratios shown in table II.5 indicate pronounced differences in overgrading and undergrading between occupations with high minority representation and those with medium representation but only slight differences between occupations with medium representation and those with low representation. Table II.5 indicates that all groups of occupations tended to be overgraded rather than aptly graded or undergraded. The focus of our study was determining whether occupations with high minority representation differed significantly from those with medium or low minority representation.
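We tested whether such differences were significant by fitting nested loglinear models and comparing their likelihood ratio chi-square values. A minimal sketch of that comparison follows, using the values reported below for models 1 and 2; the variable names are ours, and scipy's chi-square survival function stands in for a table lookup.

```python
# A minimal sketch of the nested-model comparison used to select the
# preferred model: the difference in likelihood ratio chi-square values
# between two nested models is itself chi-square distributed, with
# degrees of freedom equal to the difference in the models' degrees of
# freedom. The values are those reported for models 1 and 2.
from scipy.stats import chi2

g2_model1, df_model1 = 698.33, 72   # model without the linear grade effect
g2_model2, df_model2 = 228.10, 70   # model with the linear grade effect

improvement = g2_model1 - g2_model2  # 470.23
df_diff = df_model1 - df_model2      # 2

# A very small p-value means the improvement cannot reasonably be
# attributed to chance, so the more complex model is preferred.
p_value = chi2.sf(improvement, df_diff)
print(round(improvement, 2), df_diff, p_value)
```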
Table II.6 shows the results of fitting the models in the lower section of table II.3 to the data in table II.5. The odds ratios we derived from the expected frequencies indicate the extent of the differences that existed in the population. As previously noted, the preferred model selected was model M3, which indicated that, in the population, incumbents of occupations with high minority representation were 1.64 times more likely to be overgraded rather than either aptly graded or undergraded. The effect of the average GS grade was not statistically eliminated, or controlled for, in this comparison. As depicted in figures II.1 and II.2, we determined through preliminary analyses that as the average GS grade increased, (1) the odds of overgrading increased significantly and (2) the odds of undergrading decreased slightly. The trend lines through the scatterplots of points, or occupations, shown in figures II.1 and II.2 were obtained from a loglinear model, which allowed the average GS grade to be linearly related to the odds of overgrading and undergrading.

[Figures II.1 and II.2: scatterplots of the odds of overgrading and of undergrading (log scale) by average GS grade, with trend lines.]

For example, because the difference in likelihood ratio chi-square values between models 1 and 2 is highly significant given the difference in degrees of freedom between the two (i.e., 698.33 - 228.10 = 470.23 with 72 - 70 = 2 degrees of freedom), we would choose model 2 as the preferred model of the two and conclude that the linear effect of the GS grade (which is present in model 2 but not in model 1) is a significant effect, or one that cannot be attributed to sampling fluctuations or chance. Comparisons between other models involve similar calculations and logic. A comparison of the variation explained by model 11 with that explained by model 2 indicates that female and minority representation accounted for about 6 percent of the variation (.731 - .673 = .058). Therefore, model 11, the preferred model, indicates that 73 percent of the variation between the actual GS grades and grades calculated on the basis of the questionnaire was attributable to a combination of average GS grade and female and minority representation. The remaining 27 percent of the variation was not explained by the variables in our study. The models we fit were the following:

Model 1: {O} {D}
Model 2: {O} {GLD}
Model 3: {O} {GLD} {FHD}
Model 4: {O} {GLD} {MHD}
Model 5: {O} {GLD} {FHD} {MHD}
Model 6: {O} {GLD} {FHD} {MHD} {FMD}
Model 7: {O} {GLD} {FHD} {MHD} {MMD}
Model 8: {O} {GLD} {FHD} {MHD} {FHMHD}
Model 9: {O} {GLD} {FHD} {MHD} {FHGLD}
Model 10: {O} {GLD} {FHD} {MHD} {MHGLD}
Model 11: {O} {GLD} {FHDU} {MHDO}

O = Occupation (1 of the 37 occupations covered by FES that were included in our study).
D = Difference between actual GS grade and questionnaire grade (overgraded, aptly graded, or undergraded).
G = Grade (GS-1 to 15).
F = Female representation (high, medium, or low).
M = Minority representation (high, medium, or low).

Subscript L indicates a linear constraint imposed upon the effect of the GS grade on overgrading or undergrading. Subscripts H and M represent dummy variables that contrast high female or minority representation with medium and low representation, and medium female or minority representation with high and low representation, respectively. Subscripts U and O on D represent dummy variables that contrast undergraded with aptly graded and overgraded, and overgraded with aptly graded and undergraded, respectively.

Of the 1,358 positions included in our analyses, more than half appeared to be overgraded, 26 percent appeared to be undergraded, and 232, or 17 percent, appeared to be aptly graded. We recognize that, on the surface, these summary results could raise questions about the overall accuracy of the FES classification system.
We believe that our results should be viewed with caution in this respect because we did not design our study to assess the overall accuracy of the classification system. Rather, our nontraditional methods (developing the job content questionnaire from the primary standard rather than from occupation-specific standards, and relying on the questionnaire rather than on desk audits) and our sample selection methodology were designed to examine the relative assignment of grades among groups of occupations. That is, our study was designed to assess whether there were differences in the likelihood of overgrading or undergrading among groups of occupations that included higher or lower proportions of women and minorities. We validated our design for achieving this objective; we did not validate our design for the objective of expressing an opinion on the overall accuracy of the classification system. Had we undertaken such an assessment, we would have used a more extensive strategy to validate the relationship between our questionnaire and the results of more traditional classification tools such as desk audits, or we would have relied more heavily on desk audits directly, which is how classification accuracy studies are usually conducted. We also recognize that the overall extent of apparent overgrading or undergrading identified may involve some measurement error. However, we have no reason to believe that such error would be more pronounced for any particular group of occupations, for example, occupations with high female representation compared with those with medium or low representation. Thus, we do not believe that our estimates of the differences in the odds of overgrading or undergrading for the groups of occupations included in our analysis have been affected by any possible measurement error. Also, the likelihood of a position being overgraded, rather than aptly graded, increased as the incumbents' GS grades increased. However, the incumbents' grades had virtually no effect on the likelihood that a position was undergraded versus aptly graded. Accordingly, in measuring the odds of overgrading among those groups of occupations, we controlled for (statistically eliminated) the grade-level effect we observed. The remaining variation in the data indicated statistically significant differences among the groups of occupations, and we report on those results. Table II.9 provides coefficients that indicate the direction and magnitude of the different effects included in model 11. The table shows that both the average GS grade of an occupation and high minority representation had statistically significant relationships with the odds of incumbents being overgraded, while only high female representation was related in a significant way to the odds of incumbents being undergraded. The odds ratio of 1.64 tells us that as the average GS grade increased across occupations by one grade, the odds of incumbents being overgraded increased by a factor of 1.64; that is, the likelihood of being overgraded in an occupation in which the average grade of incumbents was GS-10 was 1.64 times as great as the likelihood of being overgraded in an occupation in which the average grade was GS-9. Independent of that, occupations with high minority representation were 2.18 times more likely to be overgraded than occupations with low or medium minority representation.
Finally, occupations with high female representation were 1.77 times more likely to be undergraded than occupations with low or medium female representation. The coefficient for the GS grade effect on undergrading was not significantly different from 1.0 (which indicates no effect), and the z-value associated with it implies that the effect can reasonably be attributed to chance. The following are GAO's supplemental comments on the Office of Personnel Management's letter dated September 8, 1995. 1. OPM said that some findings in our study, such as the overgrading and undergrading within the same occupation, were left largely unexplained. As noted in the text, the job content questionnaire was designed and validated to achieve our review objective of comparing groups of occupations, and we cannot attest to the questionnaire's validity when used across GS grades within occupations, for specific occupations, or for individual positions. 2. OPM said that several uninvestigated variables were mentioned in our study, any one of which might account for some or all of the differences between the study results and the agency classification results. As noted in the text, we reported that 73 percent of the variation we found between actual GS grades and those we estimated on the basis of the questionnaire was attributable to the average GS grades and female or minority representation; the remaining 27 percent was not explained by the variables included in our study. Thus, we believe the variables in our study accounted for most of the variation.
“Sex Differences in Early Labor Force Experience: Harbinger of Things to Come.” Social Forces, Vol. 62 (1983), pp. 467-486. Greig, J.J., P.F. Orazem, and J.P. Mattila. “Measurement Error in Comparable Worth Pay Analysis: Causes, Consequences, and Corrections.” Journal of Social Issues, Vol. 45 (1989), pp. 135-151. Grider, D., and L.A. Toombs. “Disproving Valuation Discrimination: A Study of Evaluator Gender Bias.” ACA Journal, Vol. 2 (1993), pp. 24-33. Gronau, R. “Sex-related Wage Differentials and Women’s Interrupted Labor Careers—The Chicken or the Egg.” Journal of Labor Economics, Vol. 6 (1988), pp. 277-301. Gross, A.L., J. Faggen, and K. McCarthy. “The Differential Predictability of the College Performance of Males and Females.” Educational and Psychological Measurement, Vol. 34 (1974), pp. 363-365. Grubb, W.N., and R.H. Wilson. “Sources of Increasing Inequality in Wages and Salaries, 1960-1980.” Monthly Labor Review, Vol. 112 (1989), pp. 3-13. Grune, J.A., ed. Manual on Pay Equity: Raising Wages for Women’s Work. Washington, D.C.: Committee on Pay Equity, 1979. Grune, J.A., and N. Reder. “Pay Equity: An Innovative Public Policy Approach to Eliminating Sex-based Wage Discrimination.” Public Personnel Management Journal, Vol. 12 (1984), pp. 70-80. Grune, J.A., and N. Reder. “Addendum—Pay Equity: An Innovative Public Policy Approach to Eliminating Sex-based Wage Discrimination.” Public Personnel Management Journal, Vol. 13 (1984), pp. 70-80. Grusky, O. “Career Mobility and Organizational Commitment.” Administrative Science Quarterly, Vol. 10 (1966), pp. 488-502. Guion, R.H. “A Parametric Study of Comparable Worth: Social Judgment Theory and Latent Trait Theory Applied to Job Evaluation.” Abstract, 1981. Gunderson, M. “Probit and Logit Estimates of Labor Force Participation.” Industrial Relations, Vol. 19 (1980), pp. 216-220. Gupta, N., and G.D. Jenkins, Jr. “Practical Problems in Using Job Evaluation Systems to Determine Compensation.” Human Resource Management Review, Vol. 1 (1991), pp. 133-144. Gutek, B.A., and L. Larwood. “Introduction: Women’s Careers Are Important and Different,” Women’s Career Development. Newbury Park, CA: Sage, 1987. Haberfeld, Y. “Employment Discrimination: An Organizational Model.” Academy of Management Journal, Vol. 35 (1992), pp. 161-180. Hahn, D.C., and R.L. Dipboye. “Effects of Training and Information on the Accuracy and Reliability of Job Evaluations.” Journal of Applied Psychology, Vol. 73 (1988), pp. 146-153. Haignere, L.V., C.A. Chertos, and R.J. Steinberg. Managerial Promotion in the Public Sector: The Importance of Eligibility Requirements on Women and Minorities. Albany, NY Center for Women in Government, State University of New York. Albany: 1981. Hall, J.K., and P.E. Spector. “Relationships of Work Stress Measures for Employees with the Same Job.” Work and Stress, Vol. 5 (1991), pp. 29-35. Hall, R., and G.V. Barrett. “Payday for Patients: Federal Guidelines or a Job-sample Approach?” American Psychologist, Vol. 46 (1977), pp. 586-588. Hallard, A.H., and H.G. Schultz. “A Factor Analysis of a Salary Job Evaluation Plan.” Journal of Applied Psychology, Vol. 35 (1952), pp. 243-246. Handbook for Analyzing Jobs. U.S. Department of Labor. Washington, D.C.: 1972. Handbook of Occupational Groups and Series. U.S. Office of Personnel Management, Washington, D.C.: 1990. Hansen, R.D., and V.E. O’Leary. “Sex-Determined Attributions,” Women, Gender, and Social Psychology. Hillsdale, NJ: Lawrence Erlbaum Associates, 1985. Harding, F.D., J.M. Madden, and K. 
Colson. “Analysis of a Job Evaluation System.” Journal of Applied Psychology, Vol. 44 (1960), pp. 354-357. Harding, S. The Science Question in Feminism. Ithica, NY: Cornell University Press, 1986. Harrel, M.S., et al. “Predicting Compensation Among MBA Graduates Five and Ten Years After Graduation.” Journal of Applied Psychology, Vol. 62 (1971), pp. 636-640. Hartenian, L.S., and N.B. Johnson. “Establishing the Reliability and Validity of Wage Surveys.” Public Personnel Management, Vol. 20 (1991), pp. 367- 383. Hartman, A. “Comparable Worth.” Harvard Women’s Law Journal, Vol. 6 (1983), pp. 201-218. Hartmann, H.I. “Capitalism, Patriarchy, and Job Segregation by Sex,” Women and the Workplace. Chicago: University of Chicago Press, 1976. Hartmann, H.I. “The Family as the Locus of Gender, Class, and Political Struggle: The Example of Housework.” Journal of Women in Culture and Society, Vol. 6 (1981), pp. 366-394. Hartmann, H.I. “Comparable Worth and Women’s Economic Independence,” Ingredients for Women’s Employment Policy. Albany: State University of New York Press, 1987. Hartmann, H.I., ed. Comparable Worth: New Directions for Research. Washington, D.C.: National Academy Press, 1985. Hartmann, H.I., ed. Computer Chips and Paper Clips: Technology and Women’s Employment Case Studies and Policy Perspectives, Vol. II. Washington, D.C.: National Academy Press, 1986. Hartmann, H.I. “Internal Labor Markets and Gender: A Case Study of Promotions,” Gender in the Workplace. Washington, D.C.: The Brookings Institution, 1987. Hartmann, H.I. “Capitalism, Patriarchy, and Job Segregation by Sex,” Women, Class, and the Feminist Imagination: A Socialist-Feminist Reader. Philadelphia: Temple University Press, 1990. Hartmann, H.I., R.E. Krant, and L.A. Tilly, eds. Computer Chips and Paper Clips: Technology and Women’s Employment, Vol. I. Washington, D.C.: National Academy Press, 1986. Hartmann, H.I., and D.M. Pearce. High Skill and Low Pay: The Economics of Child Care at Work. Institute for Women’s Policy Research. Washington, D.C.: 1989. Hartmann, H., and R. Spalter-Roth. Women in Telecommunications: An Acception to the Rule. Institute for Women’s Policy Research. Washington, D.C.: 1989. Hartog, J. “On the Multicapability Theory of Income Distribution.” European Economic Review, Vol. 19 (1977), pp. 157-171. Hartog, J. “Earnings and Capability Requirements.” Review of Economics and Statistics, Vol. 62 (1980), pp. 230-240. Harvey, R.J. “Job Analysis.” Handbook of Industrial and Organizational Psychology, 2nd ed., Vol. II. Palo Alto, CA: Consulting Psychologists Press, Inc., 1991. Harvey, R.J., et al. “Dimensionality of the Job Element Inventory, a Simplified Worker-oriented Job Analysis Questionnaire.” Journal of Applied Psychology, Vol. 73 (1988), pp. 639-646. Hay, E.N. “Planning for Fair Salaries and Wages.” Personnel Journal, Vol. 18 (1939), pp. 141-150. Hay, E.N. “Job Evaluation—A Discussion.” Personnel Journal, Vol. 28 (1949), pp. 262-266. Hay, E.N. “Techniques of Securing Agreement in Job Evaluation Committees.” Personnel, Vol. 26 (1950), pp. 307-312. Hay, E.N. “The Application of Weber’s Law to Job Evaluation Estimates.” Journal of Applied Psychology, Vol. 34 (1950), pp. 102-104. Hay, E.N. “The Attitude of the American Federation of Labor on Job Evaluation.” Personnel Journal, Vol. 26, pp. 163-169. Hay, E.N. “What Kind of Job Evaluation?—A Reply.” Public Personnel Review, Vol. 14 (1953), pp. 123-127. Hay, E.N., and D. Purves. “The Profile Method of High-level Job Evaluation.” Personnel, Vol. 28 (1951), pp. 
162-170. Hay, E.N., and D. Purves. “A New Method of Job Evaluation: A Guide Chart-profile Method.” Personnel, Vol. 31 (1954), pp. 72-80. Hayes, R. “Gender Nontraditional or Sex Atypical or Gender Dominant or ... Research: Are We Measuring the Same Thing?” Journal of Vocational Behavior, Vol. 29 (1986), pp. 79-88. Hazel, J.T. “Reliability of Job Ratings as a Function of Time Spent on Evaluation.” The Journal of Industrial Psychology, Vol. 4 (1966), pp. 16-19. Hazel, J.T., J.M. Madden, and E.E. Christal. “Agreement Between Worker-supervisor Descriptions of the Worker’s Job.” Journal of Industrial Psychology, Vol. 9 (1964), pp. 71-79. Hegtvedt, K.A. “Fairness Conceptualizations and Comparable Worth.” Journal of Social Issues, Vol. 45 (1989), pp. 81-97. Heilman, M.E. “High School Students Occupational Interest as a Function of Projected Sex Ratios in Male Dominated Occupations.” Journal of Applied Psychology, Vol. 64 (1979), pp. 275-279. Henderson, R. Compensation Management, 5th ed. Englewood Cliffs, NJ: Prentice Hall, 1989. Henderson, R.I., and K.L. Clarke. “Job Pay for Job Worth: Designing and Managing an Equitable Job Classification and Pay System.” Research Monograph 86, College of Business Administration. Atlanta: Georgia State University Business Publishing Division, 1981. Hennig, M., and A. Jardim. The Managerial Woman. New York: Simon and Schuster, 1977. Hermkens, P., and P. Van Wijngaarden. “Job Evaluation and Justification Criteria for Income Differentials.” Applied Psychology: An International Review, Vol. 36 (1987), pp. 109-117. Hersch, J. “Male-female Differences in Hourly Wages: The Role of Human Capital, Working Conditions and Housework.” Industrial and Labor Relations Review, Vol. 44 (1991), pp. 746-759. Hewitt, B.M., and R.D. Goldman. “Occam’s Razor Slices Through the Myth That College Women Overachieve.” Journal of Educational Psychology, Vol. 67 (1975), pp. 325-330. Hill, M.A., and M.R. Killingsworth, eds. Comparable Worth: Analysis and Evidence. Ithaca, NY: ILR Press, 1989. Hitt, M.A., and S.H. Barr. “Managerial Selection Decision Models: The Examination of Configural Cue Processing.” Journal of Applied Psychology, Vol. 74 (1989), pp. 53-61. Hoffmann, C.C., and K.P. Hoffmann. “Does Comparable Worth Obscure the Real Issues?” Personnel Journal, Vol. 66 (1987), pp. 83-95. Hoffmann, C., and S.S. Reed. “Sex Discrimination?—The XYZ Affair.” The Public Interest, Vol. 62 (1981), pp. 21-39. Hogan, J.C., et al. “Reliability and Validity of Methods for Evaluating Perceived Physical Effort.” Journal of Applied Psychology, Vol. 65 (1980), pp. 672-679. Hollenbeck, J.R., et al. “Sex Differences in Occupational Choice, Pay, and Worth: A Supply-side Approach to Understanding the Male-female Wage Gap.” Personnel Psychology, Vol. 40 (1987), pp. 715-743. Holsti, O.R. Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley Publishing Company, 1969. Hornsby, J.S., P.G. Benson, and B.N. Smith. “An Investigation of Gender Bias in the Job Evaluation Process.” Journal of Business and Psychology, Vol. 2 (1987), pp. 150-159. Horrigan, J., and A. Harriman. “Comparable Worth: Public Sector Unions and Employers Provide a Model for Implementing Pay Equity.” Labor Law Journal, Vol. 39 (1988), pp. 704-711. How to Write Position Descriptions. U.S. Civil Service Commission. Washington D.C.: 1978. Huber, V.L. “Comparison of Supervisor-incumbent and Female-male Multidimensional Job Evaluation Ratings.” The Journal of Applied Psychology, Vol. 76 (1991), pp. 115-121. Hunt, T. 
“Futurism and Futurists in Personnel.” Public Personnel Management, Vol. 13 (1984), pp. 511-520. Hunter, L. “A Method for Monitoring University Faculty Salary Policies for Sex Bias,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Huseman, R.C., J.D. Hatfield, and E.W. Miles. “Test for Individual Perceptions of Job Equity: Some Preliminary Findings.” Perceptual and Motor Skills, Vol. 61 (1985), pp. 1055-1064. Hutner, F.C. Equal Pay for Comparable Worth: The Working Woman’s Issue of the Eighties. New York: Praeger, 1986. Instructions for the Factor Evaluation System. U.S. Civil Service Commission. Washington, D.C.: 1977. Interpretative Bulletin of Code of Federal Regulations, Equal Pay for Equal Work, Under the Fair Labor Standards Act, Title 29 Part 800. U.S. Department of Labor, Employment Standards Administration, Wage and Hour Division. Washington, D.C.: 1971. Introductory Guide for Use With the Position Analysis Questionnaire. PAQ Services, Inc. Logan, UT: 1991. Ivancevich, J.M., and S.M. Smith. “Job Difficulty as Interpreted by Incumbents: A Study of Nurses and Engineers.” Human Relations, Vol. 35 (1982), pp. 391-412. Jackson, L.A. “Relative Deprivation and the Gender Wage Gap.” Journal of Social Issues, Vol. 45 (1989), pp. 117-133. Jackson, L.A., and S.V. Grabski. “Perceptions of Fair Pay and the Gender Wage Gap.” Journal of Applied Social Psychology, Vol. 18 (1988), pp. 606-625. Jacobs, J.A. Revolving Doors: Sex Segregation and Women’s Careers. Stanford, CA: Stanford University Press, 1989. Jacques, E. Time-span Handbook. London: Heinemann, 1971. Jacques, E. “Taking Time Seriously in Evaluating Jobs.” Harvard Business Review, Vol. 57 (1979), pp. 124-132. Jagacinski, C.M., W.K. LeBold, and K.W. Linden. “The Relative Career Advancement of Men and Women Engineers in the United States.” Work and Stress, Vol. 1 (1987), pp. 235-247. James, L.R. “Aggregation Bias in Estimates of Perceptual Agreement.” Journal of Applied Psychology, Vol. 67 (1982), pp. 219-229. Jaussaud, D.P. “Can Job Evaluation Systems Help Determine the Comparable Worth of Male and Female Occupations?” Journal of Economic Issues, Vol. 18 (1984), pp. 473-482. Jeanneret, P.R. “Equitable Job Evaluation and Classification With the Position Analysis Questionnaire.” Compensation Review, Vol. 12 (1980), pp. 32-47. Jeanneret, P.R. “Affidavit of Paul Richard Jeanneret.” Houston, TX: 1983. Job Evaluation: A Practical Guide. British Institute of Management. Southampton: 1961. Job Evaluation Handbook. State of Iowa. Des Moines. Job Evaluation. International Labour Organization. Geneva, Switzerland: 1986. Johnson, N.B. and R.A. Ash. “Integrating the Labor Market With Job Evaluation: Clearing the Cobwebs.” Presented at the 1st Annual Society for Industrial and Organizational Psychology Conference. Chicago: 1986. Jones, M.B., C.A. Braddick, and P.M. Shafer. “Will Broadband Replace Traditional Salary Structures?” Journal of Compensation and Benefits, Vol. 7 (1991), pp. 30-35. Kahn, A., R.E. Nelson, and W.P. Gaeddert. “Sex of Subject and Sex Composition of the Group as Determinants of Reward Allocations.” Journal of Personality and Social Psychology, Vol. 38 (1980), pp. 737-750. Kahn, A., et al. “Equity and Equality: Male and Female Means to a Just End.” Basic and Applied Social Psychology, Vol. 1 (1980), pp. 173-197. Kahn, A.S., and W.P. Gaeddert. 
“From Theories of Equity to Theories of Justice: The Liberating Consequences of Studying Women,” Women, Gender, and Social Psychology. Hillsdale, NJ: Lawrence Erlbaum Associates, 1985. Kalin, R., and D.C. Hodgins. “Sex Bias in Judgements of Occupational Suitability.” Canadian Journal of Behavior Science, Vol. 16 (1984), pp. 311-325. Kandel, T. What Women Earn. New York: The Linden Press/Simon and Schuster, 1981. Kandel, W.L. “Current Developments in EEO: Pregnancy Discrimination in Context.” Employee Relations Law Journal, Vol. 5 (1979), pp. 258-268. Kanter, R.M. Men and Women of the Corporation. New York: Basic Books, 1977. Kanter, R.M. “The Impact of Hierarchical Structures on the Work Behavior of Women and Men.” Social Problems, Vol. 23 (1978), pp. 415-430. Kanter, R.M. “From Status to Contribution: Some Organizational Implications of the Changing Basis for Pay.” Personnel, Vol. 64 (1987), pp. 12-37. Katzell, M.E., and W.C. Byham, eds. Women in the Work Force: Confrontation With Change. New York: Behavioral Publication Inc., 1972. Kauggman, B.E., ed. How Labor Markets Work. Lexington, MA: Lexington Books, 1988. Kaye, D. “Statistical Evidence of Discrimination.” Journal of the American Statistical Association, Vol. 77 (1982), pp. 773-792. Keith, P.M. “Sex, Occupation, Year of Graduation and Perceptions of Job Factors.” Journal of Employment Counseling, Vol. 15 (1980), pp. 180-186. Kelley, H.H. “The Processes of Causal Attribution.” American Psychologist, Vol. 28 (1973), pp. 107-128. Kelly, R.M., and J. Bayes, eds. Comparable Worth, Pay Equity, and Public Policy. New York: Greenwood Press, 1989. Kerr, C., and L.H. Fisher. “Effect of Environment and Administration on Job Evaluation.” Harvard Business Review, Vol. 28 (1950), pp. 77-96. Kessler-Harris, A. “The Debate Over Equality for Women in the Work Place: Recognizing Differences,” Women and Work 1: An Annual Review. Beverly Hills, CA: Sage Publications, Inc., 1985. Kessler-Harris, A. “The Just Price, the Free Market, and the Value of Women.” Feminist Studies, Vol. 14 (1988), pp. 235-250. Kessler-Harris, A. A Women’s Wage: Historical Meanings and Social Consequences. Lexington, KY: The University Press of Kentucky, 1990. Kiesler, S., and T. Finholt. “The Mystery of RSI.” American Psychologist, Vol. 43 (1988), pp. 1004-1015. Kilberg, W.J. “The Earnings Gap and Comparable Worth.” Employee Relations Law Journal, Vol. 102 (1985), pp. 579-583. Killingsworth, M.R. “Comparable Worth in the Job Market: Estimating Its Effects.” Monthly Labor Review, Vol. 108 (1985), pp. 39-41. Killingsworth, M.R. The Economics of Comparable Worth. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1990. Knapp, L., and R.R. Knapp. “Clustered Occupational Interest Measurement Based on Sex-balanced Inventory Items.” Journal of Educational Measurement, Vol. 19 (1982), pp. 75-81. Knepper, J.K. Attitudes Towards Wage-setting in the Private Sector: A Case Study. The National Commission on Working Women. Washington, D.C. Komarovsky, M. “Female Freshman View Their Future: Career Salience and Its Correlates.” Sex Roles, Vol. 8 (1982), pp. 299-314. Kovach, K.A. “Implicit Stereotyping in Personnel Decisions.” Personnel Journal, Vol. 60 (1981), pp. 716-722. Kozlowski, S.W.J., and J.K. Ford. “Rater Information Acquisition Processes: Tracing the Effects of Prior Knowledge, Performance Level, Search Constraint, and Memory Demand.” Organizational Behavior and Human Decision Processes, Vol. 49 (1991), pp. 282-301. Krefting, L.A., P.K. Berger, and M.J. Wallace. 
“The Contribution of Sex Distribution, Job Content, and Occupational Classification to Job Sextyping: Two Studies.” Journal of Vocational Behavior, Vol. 13 (1978), pp. 181-191. Kress, A.L. “Sound Wage Payment Policies,” The AMA Handbook of Wage and Salary Administration: Tested Compensation Methods for Factory, Office, and Managerial Personnel. New York: American Management Association, 1950. Krippendorff, K. Content Analysis: An Introduction to Its Methodology. Newbury Park, CA: Sage, 1980. Kurtz, M., and E.C. Hocking. “Nurses v. Tree-trimmers.” Public Personnel Management Journal, Vol. 12 (1983), pp. 369-381. Lacey, N. “Legislation Against Sex Discrimination: Questions From a Feminist Perspective.” Journal of Law and Society, Vol. 14 (1987), pp. 411-421. Laking, J., and R. Roark. Retailing Job Analysis and Job Evaluation. New York: National Retail Merchants Association, 1975. Langstroth, L. “Job Evaluation Discussion.” Personnel Journal, Vol. 29 (1950), pp. 180-182. Langwell, K.M. “Real Returns to Career Decisions: The Physicians Specialty and Location Choices.” The Journal of Human Resources, Vol. 15 (1980), pp. 278-86. Lanham, E. “Job Evaluation in Municipalities.” Public Personnel Review, Vol. 14 (1953), pp. 26-30. Lanham, E. “Policies and Practices in Job Evaluation: A Survey.” Personnel, Vol. 29 (1953), pp. 492-498. Lanham, E. Job Evaluation. New York: McGraw-Hill Book Co, 1955. Larwood, L., and U.E. Gattiker. “A Comparison of the Career Paths Used by Successful Women and Men,” Women’s Career Development. Newbury Park, CA: Sage, 1987. Larwood, L., A.H. Stromberg, and B.A. Gutek, eds. Women and Work 1: An Annual Review. Beverly Hills, CA: Sage Publications, Inc., 1985. Lawler, E.E., III. Pay and Organizational Effectiveness: A Psychological View. New York: McGraw-Hill, 1971. Lawler, E.E., III. Pay and Organization Development. Reading, MA: Addison-Wesley Publishing Company, 1981. Lawler, E.E., III. “What’s Wrong With Point-factor Job Evaluation.” Personnel, Vol. 64 (1987), pp. 38-44. Lawler, E.E., III. Strategic Pay: Aligning Organizational Strategies and Pay Systems. San Francisco: Jossey-Bass, 1990. Lawler, E.E., III, and J.R. Hackman. “Impact of Employee Participation in the Development of Pay Incentive Plans.” Journal of Applied Psychology, Vol. 53 (1969), pp. 467-471. Lawshe, C.H. “Studies in Job Evaluation: The Adequacy of Abbreviated Point Ratings for Hourly-paid Jobs in Three Industrial Plants.” Journal of Applied Psychology, Vol. 29 (1945), pp. 177-184. Lawshe, C.H. “Toward Simplified Job Evaluation.” Personnel, Vol. 22 (1945), pp. 153-160. Lawshe, C.H. “The Reliability of Two Job Evaluation Systems.” American Psychologist, Vol. 2 (1947), pp. 339. Lawshe, C.H., and S.L. Alessi. “Studies in Job Evaluation: Analysis of Another Point Rating Scale for Hourly-paid Jobs and the Adequacy of an Abbreviated Scale.” Journal of Applied Psychology, Vol. 30 (1946), pp. 310-319. Lawshe, C.H., and P.C. Fabro. “Studies in Job Evaluation: The Reliability of an Abbreviated Job Evaluation System.” Journal of Applied Psychology, Vol. 43 (1949), pp. 158-166. Lawshe, C.H., and A.A. Maleski. “Studies in Job Evaluation: An Analysis of Point Ratings for Salary Paid Jobs in an Industrial Plant.” Journal of Applied Psychology, Vol. 30 (1946), pp. 117-128. Lawshe, C.H., and E.J. McCormick. “What Do You Buy With the Wage or Salary Dollar?” Personnel, Vol. 24 (1947), pp. 102-106. Lawshe, C.H., and G.A. Satter. 
“Studies in Job Evaluation: Factor Analyses of Point Ratings for Hourly-paid Jobs in Three Industrial plants.” Journal of Applied Psychology, Vol. 28 (1944), pp. 189-198. Lawshe, C.H., and R.F. Wilson “Studies in Job Evaluation: An Analysis of the Factor Comparison Systems as It Functions in a Paper Mill.” Journal of Applied Psychology, Vol. 30 (1946), pp. 426-434. Lawshe, C.H., and R. Wilson. “Studies in Job Evaluation: The Reliability of Two Point Rating Systems.” Journal of Applied Psychology, Vol. 31 (1947), pp. 355-365. Lee, B.A., D.W. Leslie, and S.G. Olswang. “Implications of Comparable Worth for Academe.” Journal of Higher Education, Vol. 58 (1987), pp. 609-628. Lee, Y.S. “Shaping Judicial Response to Gender Discrimination in Employment Compensation.” Public Administration Review, Vol. 49 (1989), pp. 420-430. Leonard, J.S. “Executive Pay and Firm Performance,” Do Compensation Policies Matter? Ithaca, NY: ILR Press, 1990. Lester, R.A. Reasoning About Discrimination: The Analysis of Professional and Executive Work in Federal Antibias Programs. Princeton, NJ: Princeton University Press, 1980. Leuptow, L.B. “Sex-typing and Change in the Occupational Choices of High School Seniors: 1964-1975.” Sociology of Education, Vol. 54 (1981), pp. 16-24. Levin, M. “Comparable Worth: The Feminist Road to Socialism.” Commentary, (1983), pp. 13-19. Levine, E.L., M. Bennett, and R.A. Ash. “Evaluation and Use of Four Job Analysis Methods for Personnel Selection.” Public Personnel Management, Vol. 8 (1979), pp. 146-151. Lewis, C.T. “Assessing the Validity of Job Evaluation.” Public Personnel Management, Vol. 18 (1989), pp. 45-63. Lewis, C.T., and C.K. Stevens. “An Analysis of Job Evaluation Committee and Job Holder Gender Effects on Job Evaluation.” Public Personnel Management, Vol. 19 (1990), pp. 271-278. Lewis, G. “Sexual Segregation of Occupations and Earnings Differential in Federal Employment.” Public Administration Quarterly, Vol. 9 (1985), pp. 274-290. Lewis, G.B. “Clerical Work and Women’s Earnings in the Federal Civil Service,” Comparable Worth, Pay Equity, and Public Policy. New York: Greenwood Press, 1988. Lindsay, C.M., and M.T. Maloney. “A Model and Some Evidence Concerning the Influence of Discrimination on Wages.” Economic Inquiry, Vol. 26 (1988), pp. 645-660. Linn, R.L. “Fair Test Use in Selection.” Review of Educational Research, Vol. 43 (1973), pp. 139-161. Livernash, E.R., ed. Comparable Worth: Issues and Alternatives. Equal Employment Advisory Council. Washington, D.C.: 1980. Livernash, E.R., ed. Comparable Worth: Issues and Alternatives, 2nd ed. rev. Equal Employment Advisory Council. Washington, D.C.: 1984. Livy, B. Job Evaluation: A Critical Review. New York: John Wiley and Sons, 1975. Lloyd, C.B., and B.T. Niemi. The Economics of Sex Differentials. New York: Columbia University Press, 1979. Lo Bosco, M. “Job Analysis, Job Evaluation, and Job Classification.” Personnel, Vol. 62 (1985), pp. 70-74. Locke, N. “Few Factors or Many?—An Analysis of a Point System of Classification.” Personnel, Vol. 25 (1949), pp. 442-448. Loeb, J.W., M.A. Ferber, and H.M. Lowry. “The Effectiveness of Affirmative Action for Women.” Journal of Higher Education, Vol. 49 (1978), pp. 218-230. London, M. “Employee Perceptions of the Job Reclassification Process.” Personnel Psychology, Vol. 29 (1976), pp. 67-77. Long, J.V. “The Idiosyncratic Determiners of Salary Differences,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. 
Lorber, L.Z., et al. Sex and Salary: A Legal and Personnel Analysis of Comparable Worth. The American Society for Personnel Administration. Alexandria, VA: 1985. Lott, M.R. Wage Scales and Job Evaluation: Scientific Determination of Wage Rates on the Basis of Services Rendered. New York: Ronald Press, 1926. Lowe, R.H., and M.A. Wittig. “Comparable Worth: Individual, Interpersonal, and Structural Considerations.” Journal of Social Issues, Vol. 45 (1989), pp. 223-246. Lumpton, T., ed. Payment Systems: Selected Reading. Baltimore, MD: Penguin Books, 1972. Lunneborg, P.W. “Service vs. Technical Interest—Biggest Sex Difference of All?” Vocational Guidance Quarterly, Vol. 28 (1979), pp. 146-153. Lutes, D., and N. Rothchild. “Compensation: Pay Equity Loses to Chicken Little and Other Excuses.” Personnel Journal, Vol. 65 (1986), pp. 124-130. Lutz, L.D. “What Kind of Job Evaluation?” Public Personnel Review, Vol. 14 (1953), pp. 119-122. Lynn, N.B., and R.E. Vaden. “Toward a Non-sexist Personnel Opportunity Structure: The Federal Executive Bureaucracy.” Public Personnel Management, Vol. 8 (1979), pp. 209-215. Lytle, C.W. Job Evaluation Methods. New York: The Ronald Press, 1946. Maccoby, E.E. and C.N. Jacklin. The Psychology of Sex Differences. Stanford, CA: Stanford University Press, 1974. MacDonald, N.E., and J.S. Hyde. “Fear of Success, Need Achievement, and Fear of Failure: A Factor Analytic Study.” Sex Roles, Vol. 6 (1980), pp. 695-711. Madden, J.M. The Methods and Foundations of Job Evaluation in the United States Air Force. Lackland Air Force Base, Personnel Laboratory, Aeronautical System Division, Air Force System Command, Technical Report ASD-TR-61-100. 1961. Madden, J.M. “The Effect of Varying the Degree of Rater Familiarity in Job Evaluation.” Personnel Administration, Vol. 25 (1962), pp. 42-46. Madden, J.M. “A Further Note on the Familiarity Effect in Job Evaluation.” Personnel Administration, Vol. 26 (1963), pp. 52-53. Madden, J.M. “Policy-capturing Model for Analyzing Individual and Group Judgement in Job Evaluation.” Journal of Industrial Psychology, Vol. 2 (1964), pp. 36-42. Madden, J.M., and R.D. Bourdon. “Effects of Variations in Rating Scale Formats on Judgment.” Journal of Applied Psychology, Vol. 48 (1964), pp. 147-151. Madigan, R.M. “Comparable Worth Judgments: A Measurement Properties Analysis.” Journal of Applied Psychology, Vol. 70 (1985), pp. 137-147. Madigan, R.M., and F.S. Hills. “Job Evaluation and Pay Equity.” Public Personnel Management, Vol. 17 (1988), pp. 323-330. Madigan, R.M., and D.J. Hoover. “Effects of Alternative Job Evaluation Methods on Decisions Involving Pay Equity.” Academy of Management Journal, Vol. 29 (1986), pp. 84-100. Maggio, R. The Dictionary of Bias-free Usage: A Guide to Nondiscriminatory Language. Phoenix: Oryx Press, 1991. Mahoney, T.A., ed. Compensation and Reward Perspectives. Homewood, IL: Richard D. Irwin, Inc., 1979. Mahoney, T.A. “Organizational Hierarchy and Position Worth.” Academy of Management Journal, Vol. 22 (1979), pp. 726-737. Mahoney, T.A. “Approaches to the Definition of Comparable Worth.” Academy of Management Review, Vol. 8 (1983), pp. 14-22. Mahoney, T.A. “Understanding Comparable Worth: A Societal and Political Perspective.” Research in Organizational Behavior, Vol. 9 (1987), pp. 209-245. Mahoney, T.A. “The Symbolic Meaning of Pay Contingencies.” Human Resource Management Review, Vol. 1 (1991), pp. 179-192. Mahoney, T.A., and R.H. Blake. 
“Judgements of Appropriate Occupational Pay as Influenced by Occupational Characteristics and Sex Characterizations.” Applied Psychology An International Review, Vol. 36 (1987), pp. 25-38. Mahoney, T.A., B. Rosen, and S.L. Rynes. “Where Do Compensation Specialists Stand on Comparable Worth?” Compensation Review, Vol. 16 (1984), pp. 27-40. Majeres, R.L. “Sex Differences in Symbol-digit Substitution and Speeded Matching.” Intelligence, Vol. 7 (1983), pp. 313-327. Major, B. “Gender Differences in Comparisons and Entitlement: Implications for Comparable Worth.” Journal of Social Issues, Vol. 45 (1989), pp. 99-115. Major, B., and B. Forcey. “Social Comparisons and Pay Evaluations: Preferences for Same-sex and Same-job Wage Comparisons.” Journal of Experimental Social Psychology, Vol. 21 (1985), pp. 393-405. Major, B., and E. Konar. “An Investigation of Sex Differences in Pay Expectations and Their Possible Causes.” Academy of Management Journal, Vol. 27 (1984), pp. 777-792. Makela, C.J. “From Equality to Equity: The Path to Comparable Worth.” Educational Record, Vol. 66 (1985), pp. 14-18. Manese, W.R. Occupational Job Evaluation: A Research-based Approach to Job Classification. New York: Quorum Books, 1988. Mangum, S.L. “Comparable Worth and Pay Setting in the Public and Private Sectors.” Journal of Collective Negotiations, Vol. 17 (1988), pp. 1-12. Marini, M.M., and M.C. Brinton. “Sex Typing in Occupational Socialization,” Sex Segregation in the Workplace: Trends, Explanations, Remedies. Washington, D.C.: National Academy Press, 1984. Martin, J., et al. “Now That I Can Have It, I’m Not So Sure I Want It,” Women’s Career Development. Newbury Park, CA: Sage, 1987. Martin, M.P., and J.D. Williams. “Effects of Statewide Salary Equity Provisions on Institutional Salary Policies: A Regression Analysis,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Mathews, J.J., W.E. Collins, and B.B. Cobb. “A Sex Comparison of Reasons for Attrition in Male-dominated Occupations.” Personnel Psychology, Vol. 27 (1974), pp. 535-541. Matlin, M.W. The Psychology of Women. New York: Holt, Reinhart, and Winston, 1987. Matthews, M.D. “Comparison of Supervisors and Incumbents, Estimates of the Worth of Workers to their Organizations: A Brief Report.” Perceptual and Motor Skills, Vol. 73 (1991), pp. 569-570. Matzer, J., Jr., ed. Pay and Benefits: New Ideas for Local Government. International City Management Association. Washington, D.C.: 1988. McArthur, L.Z. “Social Judgment Biases in Comparable Worth Analysis,” Comparable Worth. Washington, D.C.: National Academy Press, 1985. McArthur, L.Z., and S.W. Obrant. “Sex Biases in Comparable Worth Analyses.” Journal of Applied Social Psychology, Vol. 16 (1986), pp. 757-770. McConomy, S., and B. Ganschinietz. “Trends in Job Evaluation Practices of State Personnel Systems: 1981 Survey Findings.” Public Personnel Management Journal, Vol. 12 (1981), pp. 1-12. McCormick, E.J. “Job and Task Analysis,” Handbook of Industrial/organizational Psychology, 1st ed. Chicago: Rand McNally College Publishing Company, 1976. McCormick, E.J. Job Analyses: Methods and Applications. New York: Amacom, 1978. McCormick, E.J., P.R. Jeanneret, and R.C. Mecham. Position Analysis Questionnaire. Purdue Research Foundation, Contract No. 4497 Form B, 8-79. West Lafayette, IN: 1969. McCormick, E.J., P.R. Jeanneret, and R.C. Mecham. 
“A Study of Job Characteristics and Job Dimensions as Based on the Position Analysis Questionnaire (PAQ).” Journal of Applied Psychology Monograph, Vol. 56 (1972), pp. 347-368. McDonough, P., R. Snider, and J.P. Kaufman. “Male-female and White-minority Pay Differentials in a Research Organization,” Discrimination in Organizations: Using Social Indicators to Manage Social Change. San Francisco: Jossey-Bass, 1979. McDowell, D.S. “An Analysis of the National Academy of Sciences Comparable Worth Study,” Comparable Worth: Issues and Alternatives, 2nd ed. Washington, D.C.: Equal Employment Advisory Council, 1984. McElrath, K. “Gender, Career Disruption, and Academic Rewards.” Journal of Higher Education, Vol. 63 (1992), pp. 269-281. McLaughlin, S.D. “Occupational Sex Identification and the Assessment of Male and Female Earnings in Equality.” American Sociological Review, Vol. 43 (1978), pp. 909-921. McMahon, W.W., and A.B. Wagner. “Expected Returns to Investment in Higher Education.” The Journal of Human Resources, Vol. 16 (1981), pp. 274-285. McNulty, D.J. “Differences in Pay Between Men and Women Workers.” Monthly Labor Review, Vol. 90 (1967), pp. 40-43. McShane, S.L. “Two Tests of Direct Gender Bias in Job Evaluation Ratings.” Journal of Occupational Psychology, Vol. 63 (1990), pp. 129-140. Medoff, J.L., and K.G. Abraham. “Experience, Performance, and Earnings.” The Quarterly Journal of Economics, Vol. 95 (1980), pp. 703-736. Medoff, J.L., and K.G. Abraham. “Are Those Paid More Really More Productive? The Case of Experience.” The Journal of Human Resources, Vol. 16 (1981), pp. 186-216. Meeker, S.E. “Equal Pay, Comparable Work, and Job Evaluation.” The Yale Law Journal, Vol. 90 (1981), pp. 657-680. Mellon, P.M., N. Schmitt, and C. Bylenga. “Differential Predictability of Females and Males.” Sex Roles, Vol. 6 (1980), pp. 173-177. Meng, G.J. “All the Parts of Comparable Worth.” Personnel Journal, Vol. 69 (1990), pp. 99-104. Messmer, D.J., and R.J. Solomon. “Differential Predictability in a Selection Model for Graduate Students: Implications for Validity Testing.” Educational and Psychological Measurement, Vol. 39 (1979), pp. 859-866. Meyer, H.H. “Comparison of Foreman and General Foreman Conceptions of the Foreman’s Job Responsibilities.” Personnel Psychology, Vol. 12 (1959), pp. 445-452. Miceli, M.P. “Review of ’Comparable Worth: The Myth and the Movement.’” Personnel Psychology, Vol. 38 (1985), pp. 474-478. Michael, R.T., and H.I. Hartmann. “Pay Equity: Assessing the Issues,” Pay Equity: Empirical Inquiries. Washington, D.C.: National Academy Press, 1989. Miles, M.C. “Studies in Job Evaluation: Validity of a Check List for Evaluating Office Jobs.” Journal of Applied Psychology, Vol. 36 (1952), pp. 97-101. Milkman, R. Gender at Work: The Dynamics of the Job Segregation by Sex During World War II. Urbana, IL: University of Illinois Press, 1987. Milkovich, G.T. “The Emerging Debate,” Comparable Worth: Issues and Alternatives, 2nd ed. Washington, D.C.: Equal Employment Advisory Council, 1984. Milkovich, G.T., and J.M. Newman. Compensation, 4th ed. Homewood, IL: Irwin, 1993. Milkovich, G.T., and A.K. Wigdor, eds. Pay for Performance: Evaluating Performance Appraisal and Merit Pay. Washington, D.C.: National Academy Press, 1991. Millborn, S.L. A Secretary and a Cook: Challenging Women’s Wages in the Courts of the United States and Great Britain. New York: ILR Press, 1989. Miller, A.R., et al., eds. Work, Jobs, and Occupations: A Critical Review of the Dictionary of Occupational Titles. 
Washington, D.C.: National Academy Press, 1980. Miller, M.M. Pay Equity in Ohio’s State Agencies: A Preliminary Report. Ohio’s Bureau of Employment Services, Women’s Division. Columbus, OH: 1984. Mincer, J. “The Distribution of Labor Incomes: A Survey With Special Reference to the Human Capital Approach.” Journal of Economic Literature, Vol. 8 (1970), pp. 1-26. Mincer, J., and S. Polachek. “Family Investments in Human Capital: Earnings of Women.” Journal of Political Economy, Vol. 82 (1974), pp. S72-S108. Miner, M.G. Job Evaluation Policies and Procedures. Bureau of National Affairs. Washington, D.C.: 1976. Modernizing Federal Classification: An Opportunity for Excellence. National Academy of Public Administration. Washington, D.C.: 1991. Montgomery, E., and W. Wascher. “Race and Gender Wage Inequality in Services and Manufacturing.” Industrial Relations, Vol. 26 (1987), pp. 284-290. Moore, W.J., D.K. Pearce, and R.M. Wilson. “The Regulation of Occupations and the Earnings of Women.” The Journal of Human Resources, Vol. 16 (1981), pp. 366-383. Moore, W.J. and J. Raisian. “Government Wage Differentials Revisited.” Journal of Labor Research, Vol. 12 (1991), 13-33. Moore, L.M., and A.U. Rickel. “Characteristics of Women in Traditional and Non-traditional Managerial Roles.” Personnel Psychology, Vol. 33 (1980), pp. 317-333. Moroney, J.R., ed. Income Inequality: Trends and International Comparisons. Lexington, MA: Lexington Books, 1979. Morse, P.K. “Detection of Sex-related Salary Discrimination: A Demonstration Using Constructed Data,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Motowidlo, S.J. “Relationship Between Self-rated Performance and Pay Satisfaction Among Sales Representatives.” Journal of Applied Psychology, Vol. 67 (1982), pp. 209-213. Mount, M.K., and R.A. Ellis. “Investigation of Bias in Job Evaluation Ratings of Comparable Worth Study Participants.” Personnel Psychology, Vol. 40 (1987), pp. 85-96. Mount, M.K., and R.A. Ellis. “Sources of Bias in Job Evaluation: A Review and Critique of Research.” Journal of Social Issues, Vol. 45 (1989), pp. 153-167. Moyer, K.L. Sex Differences in Vocational Aspirations and Expectations of Pennsylvania 11th Grade Students. Pennsylvania Department of Education, 1978. Muffo, J.A., L. Braskamp, and I.W. Langston, IV. “Equal Pay for Equal Qualifications? A Model for Determining Race or Sex Discrimination in Salaries,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Mulcahy, R.W., and J.E. Anderson. “The Bargaining Battleground Called Comparable Worth.” Public Personnel Management, Vol. 15 (1986), pp. 233-247. Mullins, W.C., and W.W. Kimbrough. “Group Composition as a Determinant of Job Analysis Outcomes.” Journal of Applied Psychology, Vol. 73 (1988), pp. 657-664. Mumford, M.D., et al. “Measuring Occupational Difficulty: A Construct Validation Against Training Criteria.” Journal of Applied Psychology, Vol. 72 (1987), pp. 578-587. Murphy, K.R. “Difficulties in a Statistical Control of Halo.” Journal of Applied Psychology, Vol. 67 (1982), pp. 161-164. Muson, H. “Hard-hat Women.” Across the Board, Vol. 18 (1981), pp. 12-18. Myers, J.H. “An Experimental Investigation of “Point” Job Evaluation.” Journal of Applied Psychology, Vol. 42 (1958), pp. 357-361. Myers, J.H. “Removing Halo from Job Evaluation Factor Structure.” Journal of Applied Psychology, Vol. 49 (1965), pp. 217-221. Nash, A.N., and S.J. 
Carroll. The Management of Compensation. Monterey, CA: Brooks/Cole Publishing Company, 1975. National Survey of Professional, Administrative, Technical, and Clerical Pay. U.S. Department of Labor, Bulletin 2271. Washington, D.C.: 1986. Naughton, T.J. “Effect of Female-linked Job Titles on Job Evaluation Ratings.” Journal of Management, Vol. 14 (1988), pp. 567-578. Neigenfind, J.L. “Position Accuracy Certification.” Public Personnel Management Journal, Vol. 11 (1982), pp. 213-218. Nelson, B.A., E.M. Opton, and T.E. Wilson. “Wage Discrimination and the “Comparable Worth” Theory in Perspective.” University of Michigan Journal of Law Reform, Vol. 13 (1980), pp. 1-301. Newman, W. “Pay Equity Emerges as a Top Labor Issue in the 1980s.” Monthly Labor Review, Vol. 105 (1982), pp. 49-51. Newman, W. “Statement to the Equal Pay Joint Committee.” Public Personnel Management Journal, Vol. 12 (1983), pp. 382-389. Niemi, A.W. “Discrimination Against Women Reconsidered.” American Journal of Economics and Sociology, Vol. 38 (1979), pp. 291-292. Nieva, V.F., and B.A. Gutek. “Sex Effects on Evaluation.” Academy of Management Review, Vol. 5 (1980), pp. 267-276. Note. “Equal Pay, Comparable Work, and Job Evaluation.” The Yale Law Journal, Vol. 90 (1981), pp. 657-680. Odel, D. “A Failed Experiment.” The National Law Journal, Vol. 14 (1991), pp. 13-14. Oldham, G.R., and H.E. Miller. “The Effect of Significant Others Job Complexity on Employee Reactions to Work.” Human Relations, Vol. 32 (1979), pp. 247-260. Olmstead, A.L., and S.M. Sheffrin. “The Medical School Admission Process: An Empirical Investigation.” The Journal of Human Resources, Vol. 16 (1981), pp. 459-467. Olney, P.B., Jr. “Meeting the Challenge of Comparable Worth: Part 1.” Compensation and Benefits Review, Vol. 19 (1987), pp. 34-44. Olson, C.A. “An Analysis of Wage Differentials Received by Workers on Dangerous Jobs.” Journal of Human Resources, Vol. 16 (1981), pp. 167-185. O’Neill, J. “Role Differentiation and the Gender Gap in Wage Rates,” Women and Work 1: An Annual Review. Beverly Hills, CA: Sage Publications, Inc., 1985. O’Neill, J., M. Brien, and J. Cunningham. “Effects of Comparable Worth Policy: Evidence from Washington State.” American Economic Review, Vol. 79 (1989), pp. 305-309. Oppenheimer, V. “The Sex Labeling of Jobs.” Industrial Relations, Vol. 7 (1968), pp. 219-234. Orazem, P.F., and J.P. Mattila. Comparable Worth and the Structure of Earnings: The Iowa Case. Iowa State University, 1987. Orazem, P.F., J.P. Mattila, and C.Y. Ruoh. “An Index Number Approach to the Measurement of Wage Differentials by Sex.” Journal of Human Resources, Vol. 25 (1990), pp. 125-136. Orazem, P.F., J.P. Mattila, and S.K. Weikum. “Comparable Worth Plans and Factor Point Pay Analysis in State Government.” Industrial Relations, Vol. 31 (1992), pp. 195-215. O’Reilly, A.P. “Skill Requirements: Supervisor-subordinate Conflict.” Personnel Psychology, Vol. 26 (1973), pp. 75-80. Otis, J.L., and R.H. Leukart. Job Evaluation: A Basis for Sound Wage Administration. Englewood Cliffs, NJ: Prentice Hall, 1954. Ott, M.D. “Retention of Men and Women Engineering Students.” Research in Higher Education, Vol. 9 (1978), pp. 137-150. Over, R. “Research Impact of Men and Women Social Psychologists.” Personality and Social Psychology Bulletin, Vol. 7 (1981), pp. 596-599. Paglin, M., and A.M. Rufolo. “Heterogeneous Human Capital, Occupational Choice, and Male-female Earnings Differences.” Journal of Labor Economics, Vol. 8 (1990), pp. 123-144. Paisner, A.M. 
“BLS Regional Offices: Contribution to Wage Programs.” Monthly Labor Review, Vol. 115 (1992), pp. 30-34. Palmer, P., and R.M. Spalter-Roth. Gender Practices and Employment: The Sears Case and the Issue of “Choice”. Institute of Women’s Policy Research. Washington, D.C. Paterson, T.T. Job Evaluation Volume 1—A New Method. London: Business Books Unlimited, 1972. Paterson, T.T. Job Evaluation Volume 2—A Manual for the Paterson Method. London: Business Books Unlimited, 1972. Paterson, T.T., and T.M. Husband. “Decision-making Responsibility: Yardstick for Job Evaluation.” Compensation Review, Vol. 2 (1970), pp. 21-31. Patten, T.H. Fair Pay: The Managerial Challenge of Comparable Job Worth and Job Evaluation. San Francisco: Jossey-Bass, 1988. Patton, J.A., C.L. Littlefield, and S.A. Self. Job Evaluation: Text and Cases, 3rd ed. Homewood, IL: Richard D. Irwin, 1964. Paul, E.F. Equity and Gender: The Comparable Worth Debate. New Brunswick, NJ: Transaction Publishers, 1989. Pay Equity in Ohio’s State Agencies. State of Ohio, Ohio’s Bureau of Employment Services, Women’s Division. Columbus, OH: 1986. Pay Equity: The Minnesota Experience. Commission on the Economic State of Women, 1989. Penner, M. “How Job-based Classification Systems Promote Organizational Ineffectiveness.” Public Personnel Management, Vol. 12 (1983), pp. 268-276. Perrin, S.M. Comparable Worth and Public Policy: The Case of Pennsylvania. Philadelphia: Industrial Research Unit, The Wharton School, University of Pennsylvania, 1985. Perrucci, C.C., and D.B. Targ. “Early Work Orientation and Later Situational Factors as Elements of Work Commitment Among Married Women College Graduates.” The Sociological Quarterly, Vol. 19 (1978), pp. 266-280. Personnel Research Bibliography on Job Evaluation. U.S. Office of Personnel Management. Washington, D.C.: 1990. Perspectives on Working Women: A Data Book. U.S. Department of Labor, Bureau of Labor Statistics, Bulletin 2080. Washington, D.C.: 1980. Peters, L.H., et al. “Sex Bias and Managerial Evaluations: A Replication and Extension.” Journal of Applied Psychology, Vol. 69 (1984), pp. 349-352. Peterson, J. “The Challenge of Comparable Worth: An Institutionalist View.” Journal of Economic Issues, Vol. 24 (1990), pp. 605-612. Pezzullo, T.R., and B.E. Brittingham, ed. Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Pezzullo, T.R., and B.E. Brittingham. “The Assessment of Salary Equity: A Methodology, Alternatives, and a Dilemma,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Pfeffer, J., and J. Ross. “Union-nonunion Effects on Wage and Status Attainment.” Industrial Relations, Vol. 19 (1980), pp. 140-151. Pfeffer, J., and J. Ross. “The Effects of Marriage and a Working Wife on Occupational and Wage Attainment.” Administrative Science Quarterly, Vol. 27 (1982), pp. 66-80. Pfeffer, J., and J. Ross. “Gender-based Wage Differences: The Effects of Organizational Context.” Work and Occupations, Vol. 17 (1990), pp. 55-78. Phillips, A., and B. Taylor. “Sex and Skill: Notes Towards a Feminist Economics.” Feminist Review, Vol. 6 (1980), pp. 79-88. Phillips, M.D., and R.L. Pepper. “Shipboard Fire-fighting Performance of Females and Males.” Human Factors, Vol. 24 (1982), pp. 277-283. Piliavin, J.A., and R.K. Unger. “The Helpful but Helpless Female: Myth or Reality,” Women, Gender, and Social Psychology. Hillsdale, NJ: Lawrence Erlbaum Associates, 1985. 
Polachek, S.W. “Potential Biases in Measuring Male-female Discrimination.” Journal of Human Resources, Vol. 10 (1975), pp. 205-229. Polachek, S.W. “Sex Differences in College Major.” Industrial and Labor Relations Review, Vol. 31 (1978), pp. 498-508. Polachek, S.W. “Occupational Self-selection: A Human Capital Approach to Sex Differences in Occupational Structure.” Review of Economics and Statistics, Vol. 63 (1981), pp. 60-69. Pommerenke, P.L. “Comparable Worth: A Panacea for Discrimination Against Women in the Labor Market?” American Economist, Vol. 32 (1988), pp. 44-48. Powers, T.N. “An Idea With a Long Way to Go.” American Bar Association Journal, Vol. 70 (1984), pp. 16-21. Pratt, L.J. “Local Government and Comparable Worth: A Case Study.” Document, City of Chattanooga Study. Chattanooga, TN: 1988. Pratt, L.J., S.A. Smullen, and B.L. Kyer. “The Macroeconomics of the Equal Pay Act.” Journal of Macroeconomics, Vol. 12 (1990), pp. 675-689. Prediger, D.J. “The Determination of Holland Types Characterizing Occupational Groups.” Journal of Vocational Behavior, Vol. 16 (1980), pp. 33-42. Prien, E.P., G.V. Barrett, and B. Svetlik. “Use of Questionnaires in Job Evaluation.” Journal of Industrial Psychology, Vol. 3 (1965), pp. 91-94. Prien, E.P., I.L. Goldstein, and W.H. Macey. “Multidomain Job Analysis: Procedures and Application.” Training and Development Journal, Vol. 41 (1987), pp. 68-72. Prien, E.P., and S.D. Saleh. “A Study of Bias in Job Analysis.” Journal of Industrial Psychology, Vol. 22, pp. 113-117. Primack, R.B., and V.E. O’Leary. “Research Productivity of Men and Women Ecologists: A Longitudinal Study of Former Graduate Students.” The Ecological Society of America Bulletin, Vol. 70 (1989), pp. 7-21. Ragan, J.F., and S.P. Smith. “The Impact of Differences in Turnover Rates on Male-female Pay Differentials.” Journal of Human Resources, Vol. 16 (1981), pp. 343-365. Ramsey, G.A. “A Generalized Multiple Regression Model for Predicting College Faculty Salaries and Estimating Sex Bias,” Salary Equity: Detecting Sex Bias in Salaries Among College and University Professors. Lexington, MA: Lexington Books, 1979. Ratner, R.S., ed. Equal Employment Policy for Women: Strategies for Implementation in United States, Canada, Western Europe. Philadelphia: Temple University Press, 1980. Ray, J.M., and A.B. Rubin. “Pay Equity for Women in Academic Libraries: An Analysis of ARL Salary Surveys, 1976/77-1983/84.” College and Research Libraries, Vol. 48 (1987), pp. 36-49. Raymond, R.D., M.L. Sesnowitz, and D.R. Williams. “Does Sex Still Matter? New Evidence From the 1980s.” Economic Inquiry, Vol. 26 (1988), pp. 43-58. Rea, L.M., and R.A. Parker. Designing and Conducting Survey Research: A Comprehensive Guide. San Francisco: Jossey-Bass Publishers, 1992. Reichenberg, N.E. “Pay Equity in Review.” Public Personnel Management, Vol. 15 (1986), pp. 211-231. Reis, H.T., and L.A. Jackson. “Sex Differences in Reward Allocation: Subjects, Partners, and Tasks.” Journal of Personality and Social Psychology, Vol. 40 (1981), pp. 465-478. Remick, H. “The Comparable Worth Controversy.” Public Personnel Management, Vol. 10 (1981), pp. 371-383. Remick, H. “An Update on Washington State.” Public Personnel Management Journal, Vol. 12 (1983), pp. 390-394. Remick, H. “Dilemmas of Implementation: The Case of Nursing,” Comparable Worth and Wage Discrimination: Technical Possibilities and Political Realities. Philadelphia: Temple University Press, 1984. Remick, H. 
Pursuant to a congressional request, GAO reviewed the relationship between job content and General Schedule (GS) grades assigned using the Factor Evaluation System (FES), focusing on whether the relationship varied on the basis of the proportion of women and minorities in various occupations. GAO found that: (1) the difference between actual GS grades and the grades based on job content under FES was directly related to female and minority representation in nonsupervisory positions reviewed; (2) the likelihood of a position being overgraded increased as the incumbents' GS grades increased, but there was no correlation between undergrading and GS grades; (3) occupations with high female representation were more likely to be undergraded than those occupations with medium or low female representation; (4) occupations with high minority representation were more likely to be overgraded than those occupations with medium or low minority representation; (5) classification experts believed that FES did not place sufficient value on the physical demands and working conditions of certain specialist occupations with high minority representation, which caused them to be overgraded on a strict FES basis; (6) private sector wage levels may have resulted in overgraded positions in computer-related occupations; (7) the Office of Personnel Management (OPM) needs to monitor its development of a new federal job classification system to ensure that disparities are identified and addressed; and (8) OPM has established a new oversight office which will conduct a governmentwide classification study.
Joint education is provided to some extent at all levels of officer and enlisted professional military education. The professional development and career progression for both officers and enlisted personnel through DOD’s professional military education is a service responsibility; embedded within the professional military education systems is a program of JPME that is overseen by the Joint Staff. For officers, this system is designed to fulfill the educational requirements for joint officer management as mandated by law. The joint education program is intended to prepare aspiring military leaders both to conduct operations in a coherently joint force and to think their way through uncertainty. Officer JPME program courses are taught at multiple sites across the country, including the service Staff and War Colleges and the National Defense University. See appendix II for a list and map of DOD’s academic institutions where officer JPME courses are taught. According to the Officer Professional Military Education Policy (OPMEP), the JPME program for officers comprises curriculum components in the five levels of the officer professional military education system and includes three statutorily mandated levels of JPME designed to progressively develop the knowledge, analytical skills, perspectives, and values that are essential for U.S. officers to function effectively in joint, interagency, intergovernmental, and multinational operations. For each of those levels, the emphasis changes to provide instruction in target areas to enhance the leader attributes of attending servicemembers at their specific rank and tenure. Within officer PME, the first level of JPME—precommissioning—provides officer candidate and officer training school, Reserve Officer Training Corps, and Military Service Academy students with a basic foundation in defense structure, the roles and missions of the other military services, the combatant command structure, and the nature of American military power and joint warfare. The second level of JPME—primary—provides junior officers at the O-1 through O-3 ranks with primary education on the tactical level of war. This level of JPME helps to foster an understanding of the Joint Task Force combatant command structure and how national and joint systems support tactical-level operations, among other subjects. The third level comprises the first statutorily mandated level of JPME requirements—JPME Phase I. This phase generally focuses on the tactical and operational levels of war and is typically attended by intermediate-level officers at the O-4 rank, as well as some officers at the O-5 and O-6 ranks. JPME Phase I is incorporated into the curricula of the intermediate- and senior-level service colleges, as well as other appropriate educational programs that meet JPME criteria and are accredited by the Chairman of the Joint Chiefs of Staff. This level of education provides officers the opportunity to gain a better understanding, from a service component perspective, of joint force employment at the operational and tactical levels of war. The subject matter to be covered by JPME Phase I instruction must, by law, include at least (1) national military strategy; (2) joint planning at all levels of war; (3) joint doctrine; (4) joint command and control; and (5) joint force and joint requirements development. The fourth level of JPME provides Phase II of the statutorily directed JPME requirements.
JPME Phase II is a follow-on for selected graduates of service schools and other appropriate education programs that complements and enhances JPME Phase I instruction. Phase II is taught to both intermediate- and senior-level (O-5 and O-6) students at National Defense University’s National War College and Dwight D. Eisenhower School for National Security and Resource Management and at the Joint Forces Staff College’s Joint and Combined Warfighting School and Joint Advanced Warfighting School, and to senior-level students at the senior-level service colleges; it consists of courses on the operational and strategic levels of war. Phase II helps prepare officers for high-level policy and command and staff responsibilities, with a focus on areas such as national security strategy and joint strategic leadership. In addition to the subjects specified for JPME Phase I, the curriculum for JPME Phase II must, by law, include (1) national security strategy; (2) theater strategy and campaigning; (3) joint planning processes and systems; and (4) joint, interagency, intergovernmental, and multinational capabilities and the integration of those capabilities. This phase completes the educational requirement for joint officer management. The fifth and final level of instruction is the CAPSTONE course of JPME for general/flag officers, which focuses on the operational and strategic levels of war for high-level joint, interagency, intergovernmental, and multinational responsibilities. Specifically, the CAPSTONE course focuses on (1) the fundamentals of joint doctrine; (2) integrating elements of national power across military operations to accomplish security and military strategies; and (3) how joint, interagency, intergovernmental, and multinational operations support strategic goals and objectives. This level of JPME is tiered to ensure the progressive and continuous development of executive-level officers. Figure 1 summarizes the five levels of JPME. In addition to meeting legislative requirements to provide education on JPME matters, DOD’s colleges and universities that provide academic year-long programs for officers are master’s degree-granting institutions. According to the Middle States Commission on Higher Education, the body that accredits National Defense University, among others, accreditation is the means of self-regulation and peer review adopted by the educational community. The accrediting process is intended to strengthen and sustain the quality and integrity of higher education, making it worthy of public confidence and minimizing the scope of external control. Accreditation by the Commission is based on the results of institutional reviews by peers and colleagues and attests to the judgment that the institution has met the following criteria: that it has a mission appropriate to higher education; that it is guided by well-defined and appropriate goals, including goals for student learning; that it has established conditions and procedures under which its mission and goals can be realized; that it assesses both institutional effectiveness and student learning outcomes, and uses the results for improvement; that it is substantially accomplishing its mission and goals; that it is organized, staffed, and supported so that it can be expected to continue to accomplish its mission and goals; and that it meets the Requirements of Affiliation and accreditation standards of the Middle States Commission on Higher Education.
DOD also conducts periodic assessments of the three statutorily mandated levels of officer JPME to ensure that the curricula meet the prescribed joint educational requirements at each level, and uses the results of these assessments to update educational policy as appropriate. For enlisted personnel, the Enlisted Professional Military Education Policy promulgates the policies, procedures, objectives, and responsibilities for enlisted professional military education and enlisted JPME. According to that policy, enlisted professional development and progression through the enlisted military education continuum is a service responsibility. The initial focus of enlisted professional military education is military occupational specialty training and education required to produce enlisted personnel capable of performing assigned tasks. Beyond occupational specialty training, the Enlisted Professional Military Education Policy states that all enlisted personnel also should be exposed to enlisted JPME as they progress through their respective services’ enlisted professional military education system. In addition, the policy states that, for some enlisted personnel, more comprehensive joint education is required to prepare those servicemembers for specific joint assignments. Enlisted JPME includes programs that span an enlisted member’s career and apply to all enlisted personnel. Basic Enlisted JPME addresses educational guidelines that should be completed by pay grade E-6, while Career Enlisted JPME addresses educational guidelines for senior enlisted personnel in grades E-6 or E-7 and above. Beyond these programs, Senior Enlisted JPME is a stand-alone Web-based course designed specifically for senior enlisted personnel in pay grades E-6 through E-9 who are serving in or are slated to serve in joint organizations. DOD’s KEYSTONE course prepares command senior enlisted leaders for service in flag-level joint headquarters or joint task force organizations. This course is designed for personnel who are serving in pay grade E-9. The focus of this course is to enable command senior enlisted leaders to think intuitively from a joint perspective while serving in a general/flag officer joint organization. DOD also developed a program that, since 2003, has provided JPME specifically to reserve officers. In response to legislative direction, the Joint Forces Staff College established a 40-week, blended-learning Advanced Joint Professional Military Education (AJPME) program that consists of two distance learning periods and two face-to-face periods. As with the active component courses, the Officer Professional Military Education Policy outlines the requirements for AJPME. The program’s mission is to educate reserve component officers to plan and execute joint, interagency, intergovernmental, and multinational operations and to instill a primary commitment to joint teamwork, attitudes, and perspectives. AJPME builds on the foundation established by the institutions teaching JPME Phase I and prepares reserve component officers (O-4 to O-6) for joint duty assignments. DOD’s JPME programs are approved by the Chairman of the Joint Chiefs of Staff. The Director for Joint Force Development retains responsibility to support the Chairman of the Joint Chiefs of Staff and the joint warfighter through joint force development, in order to advance the operational effectiveness of the current and future joint force.
The Joint Staff’s Directorate for Joint Force Development (J7) oversees general policy for the JPME programs. The MECC, which was tasked with conducting the most recent DOD-initiated study on JPME, serves as an advisory body to the Director of the Joint Staff on joint education issues. The purpose of the MECC is to address key educational issues of interest to the joint education community, promote cooperation and collaboration among the MECC member institutions, and coordinate joint education initiatives. Its membership consists of the MECC principals and a supporting MECC Working Group. These principals include the Director for Joint Force Development; the Deputy Director of the Joint Staff for Military Education; the presidents, commandants, and directors of the joint professional military education colleges and service universities; and the heads of any other JPME-accredited institutions. Additional representatives from other commands and organizations may be invited to participate as appropriate. The MECC Working Group is composed of dean-level/O-6 representatives of the MECC principals. The Deputy Director of the Joint Education and Doctrine Division chairs the MECC Working Group. Additionally, the Secretary of Defense, service chiefs, and combatant commanders are invited to send participants to the MECC meetings to provide feedback to improve the educational process. The purpose of DOD’s study of the JPME program was to identify desired leader attributes by defining what is needed from the JPME career-long learning experience to support DOD’s strategic vision, as well as any gaps in the current educational program that must be closed to develop the leaders needed to achieve that vision. The original purpose and objectives for the study were laid out in a July 16, 2012, white paper from the Chairman of the Joint Chiefs of Staff and a Joint Staff memorandum issued with the white paper. In the white paper, the Chairman stated that the study’s purpose was to support the development of the department’s leaders by fostering the values, strategic vision, and critical thinking skills needed to lead and support the development of Joint Force 2020. The white paper also identified four attributes that the department’s education programs should develop in its leaders: the ability to understand the security environment and the contributions of all elements of national power; the ability to deal with surprise and uncertainty; the ability to anticipate and recognize change and lead transitions; and the ability to operate on intent through trust, empowerment, and understanding. The Chairman’s white paper also stated that other attributes for leader development would evolve and would need to be aligned with future operations and incorporated into curricula to help ensure that gaps are identified and eliminated. The memorandum, signed by the Director for Joint Force Development, identified three questions meant to shape the review. The first question focused on the value of joint education, why it is needed, and how the military will train and educate its leaders to meet the requirements of Joint Force 2020. The second question focused on the educational outcomes sought by the department, specifically with regard to training leaders to carry out DOD’s strategic vision, in order to help ensure alignment of desired leader attributes with the curriculum.
The third question built upon the first two by focusing on any changes needed to strengthen and achieve the value and outcomes DOD seeks in building leaders for Joint Force 2020. The memorandum also instructed DOD’s MECC to conduct the study. Subsequent to the issuance of the white paper and memorandum, the MECC met on August 23, 2012, to refine the study’s specific objectives. According to officials, at the direction of the Director for Joint Force Development, the MECC removed the requirement to address the value of joint education and instead focused its efforts on identifying the intended outcomes of the educational program and the actions needed to achieve those outcomes. The MECC’s discussions also established two broad objectives for the study—documented in meeting minutes and an enclosure—based on the direction in the memorandum and further direction received from the Chairman. The two broad objectives the MECC established were to:
1. define the Joint Education Continuum needed to meet the joint leader development goals for Joint Force 2020, and
2. determine the gaps that need to be addressed for the Joint Education enterprise to progress from the current joint leader development outcomes to producing the future desired joint leader attributes for Joint Force 2020.
The evolution of the study’s objectives is described below in figure 2. The MECC’s first objective focused on defining and improving upon both the officer and enlisted joint education continuums to ensure that they are in line with the development goals for DOD’s strategic vision. The MECC identified a need to retain many aspects of the current officer joint education continuum, as well as a need to incorporate new elements into the proposed officer joint education continuum that would help to meet the changing needs of the future leaders of Joint Force 2020. For example, the proposed officer continuum retains the comprehensive approach of embedding elements of JPME at each level of service-delivered education, with an emphasis on the statutory requirements for JPME Phases I and II, as well as key milestones from the precommissioning level of JPME to the general/flag officer rank. In addition to retaining the existing institutional structure for providing JPME education, however, the MECC also emphasized in its report the importance of self-directed, career-long learning and development. This facet of officer development is intended to convey an expectation that individuals are responsible for their own education and development, thereby inculcating a culture of lifelong learning. By contrast, the MECC’s study concluded that changes to the enlisted JPME continuum—specifically, between the Senior Enlisted JPME and KEYSTONE courses—are needed to better reflect the Chairman’s intent for enlisted professional military education. Currently, enlisted personnel who complete the Senior Enlisted JPME course at pay grade E-6 or E-7 are not eligible to attend the KEYSTONE course until they attain pay grade E-9, potentially leaving a significant interval of time between participation in the two courses. Accordingly, the MECC recommended dividing the existing Senior Enlisted JPME course into two parts. Both parts would be offered prior to the KEYSTONE course and would reduce the interval of time that senior enlisted servicemembers must wait between phases of enlisted joint education.
The second objective expanded upon the first and focused on identifying and closing gaps in the current JPME curriculum to produce joint leaders with attributes that are in line with Joint Force 2020. Specifically, the MECC refined the four attributes proposed in the Chairman’s white paper, added two additional attributes, and identified a set of subattributes associated with each of the six desired leader attributes, which the MECC used to analyze the current capabilities of the department’s joint education programs. The refined list of desired leader attributes consists of the abilities to:
1. Understand the security environment and contributions of all instruments of national power,
2. Anticipate and respond to surprise and uncertainty,
3. Anticipate and recognize change and lead transitions,
4. Operate on intent through trust, empowerment, and understanding,
5. Make ethical decisions based on shared values of the profession of arms, and
6. Think critically and strategically in applying joint warfighting principles and concepts to joint operations.
The MECC then conducted a gap analysis to identify and crosswalk the desired leader attributes for Joint Force 2020 with the current officer and enlisted personnel JPME education continuums; a simplified illustration of such a crosswalk appears in the sketch at the end of this discussion. For officers, the MECC’s analysis showed that, while the JPME program currently addresses the desired leader attribute outcomes at some level and also meets the intent of the existing requirements identified by the Skelton Panel, gaps exist where changes are needed to meet the challenges of Joint Force 2020. These include changes to teaching methodologies, assessment mechanisms, and other areas in support of the newly identified desired leader attributes. Table 1 summarizes where DOD has identified gaps in its officer JPME curricula associated with the identified list of desired leader attributes and subattributes. For enlisted personnel, the MECC determined that a separate and distinct set of desired leader attributes should be developed for senior enlisted leaders, which will require additional study. Officials from the Joint Staff initiated this work on April 8, 2013, and assigned the task to DOD’s Enlisted Military Education Review Council (EMERC). The EMERC took up this effort at its August 22, 2013, meeting and proposed a list of six desired leader attributes for enlisted personnel. DOD officials provided us with the proposed list of enlisted desired leader attributes on September 16, 2013. The MECC made a total of 21 recommendations, which collectively address the study’s objectives and span four categories: (1) desired leader attributes, subattributes, or educational outcomes; (2) joint education continuums; (3) lifelong learning and advancements in learning technologies; and (4) faculty quality. See appendix III for a list of the 21 recommendations. Several of these recommendations call for additional study by the National Defense University and the services to support the achievement of the leader attributes. Other recommendations emphasize the importance of strengthening the educational outcomes at the primary level of joint education, making education programs more accessible, and using prior learning assessments to tailor education opportunities to students’ needs, among other things. One recommendation, in particular, calls for developing a separate and distinct set of desired leader attributes to guide enlisted joint education.
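To make the crosswalk concrete, the following is a minimal Python sketch of how desired leader attributes could be mapped against the officer cohort groups and scanned for gaps. The attribute list and cohort groups come from the report; the rating scale, gap threshold, and sample data are illustrative assumptions, not the MECC’s actual instrument.

```python
# Hypothetical sketch of an attribute-by-cohort crosswalk. Ratings are
# assumed to be on a 1-4 scale (4 = outcome fully addressed); the MECC's
# actual rating instrument is not reproduced in the report.

ATTRIBUTES = [
    "Understand the security environment and instruments of national power",
    "Anticipate and respond to surprise and uncertainty",
    "Anticipate and recognize change and lead transitions",
    "Operate on intent through trust, empowerment, and understanding",
    "Make ethical decisions based on shared values of the profession of arms",
    "Think critically and strategically in applying joint warfighting principles",
]
COHORTS = ["precommissioning", "intermediate", "senior", "CAPSTONE"]

def build_crosswalk(assessments):
    """Arrange (attribute, cohort, rating) triples into a matrix keyed by
    attribute, so gaps can be read across the education continuum."""
    matrix = {attr: {c: None for c in COHORTS} for attr in ATTRIBUTES}
    for attr, cohort, rating in assessments:
        matrix[attr][cohort] = rating
    return matrix

def find_gaps(matrix, threshold=2):
    """Flag attribute/cohort cells whose rating falls below the threshold."""
    return [(attr, cohort)
            for attr, row in matrix.items()
            for cohort, rating in row.items()
            if rating is not None and rating < threshold]

if __name__ == "__main__":
    sample = [(ATTRIBUTES[1], "intermediate", 1), (ATTRIBUTES[4], "senior", 3)]
    for attr, cohort in find_gaps(build_crosswalk(sample)):
        print(f"Gap: '{attr}' at the {cohort} level")
```

Run as written, the sketch flags only the one sample cell rated below the threshold; in practice each cell would aggregate many program-level ratings, as the scoring sketch later in this section suggests.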
Additionally, the MECC considered whether to make any legislative proposals, but did not identify the need for any legislative changes to the joint education enterprise based on the results of the review. According to officials, the results of the study will ultimately be used to inform updates and revisions to the Officer Professional Military Education Policy and the Enlisted Professional Military Education Policy, which are the primary instructions that set forth the policies, procedures, objectives, and responsibilities for professional military education and JPME. Our next section discusses DOD’s plans in this area. The methodology DOD used for its study of JPME generally included leading practices for evaluating strategic training and other programs, such as reviewing existing literature, using available data, and assessing skills gaps. However, the department has not yet fully planned for follow-on actions or fully engaged all stakeholders to help ensure that they are held accountable for making progress in implementing the study’s recommendations. Specifically, the department has not taken steps to ensure that the results of the study will be implemented using an approach agreed upon by all stakeholders, or that the results will be implemented in a timely manner. Further, the department has not yet fully evaluated the potential costs associated with implementing the MECC’s recommendations, which would provide decision makers with more complete information and assurance that the recommendations will be cost-effective. According to our 2012 report, Designing Evaluations, a key first step in designing a program evaluation is to conduct a literature review in order to understand the program’s history, related policies, and knowledge base. A review of the relevant policy literature can help focus evaluation questions on knowledge gaps, identify design and data collection options used in the past, and provide important context for the requester’s questions. Further, according to the RAND Corporation’s Standards for High-Quality Research and Analysis, a study team should demonstrate an understanding of other related studies, which should be evident in, for example, how a problem is formulated and approached. The team should also take particular care to explain the ways in which its study agrees with, disagrees with, or otherwise differs importantly from previous studies. The MECC’s report states that the initial step in its review was to determine the JPME enterprise’s effectiveness in meeting its current requirements. According to an official, the MECC reviewed many documents as part of a literature review, but derived its findings from four specific sources—all issued within the last 5 years—that it determined were most relevant to the review. Specifically, these sources were: the 2010 House Armed Services Committee (HASC) Subcommittee on Oversight and Investigations (O&I) study, which provided a detailed assessment of the state of JPME; A Decade of War, conducted by the Joint and Coalition Operational Analysis Division within the Joint Staff (J7); The Ingenuity Gap: Officer Management for the 21st Century; and Keeping the Edge: Revitalizing America’s Military Officer Corps. According to the MECC’s report, these documents collectively provide a broad look across joint education.
The MECC found that the 2010 HASC report, for example, provided a detailed assessment of the state of JPME and concluded that, while the overall PME system was basically sound, some areas needed improvement, such as an increased emphasis on joint, interagency, intergovernmental, and multinational operations. Consistent with this finding, the MECC made a recommendation that the Joint Professional Military Education Division review specific subject areas for increased emphasis within joint education as part of the process of revising the officer and enlisted policies. These subject areas include cyber warfare, interagency and intergovernmental operations, and information and economic instruments of national power, among others. Further, these sources suggested that responsibilities for junior officers will increase and that joint education should be expanded at lower levels. Similarly, the MECC found that officers are receiving joint education exposure earlier in their careers, and it also recommended that the services strengthen the instruction of the desired leader attributes at the primary level of joint education for junior officers. Additionally, these sources recognized the need for increased emphasis on handling uncertainty and on critical thinking skills. DOD’s study also focused on the ability to anticipate and respond to surprise and uncertainty, and on critical thinking in joint operations. RAND’s Standards report further indicates that the data and information used for a study should be the best available and that a research team should indicate limitations in the quality of the available data. As part of its second objective, the MECC analyzed data from joint accreditation visits, which provided broad areas of assessment, and from institutional survey and outcome assessments, which provided more detailed observations related to curricula. More specifically, the MECC conducted a review of the Process for the Accreditation of Joint Education (PAJE) reaffirmation results and institutional self-studies for the past 10 years. DOD uses these studies to document compliance with the Officer Professional Military Education Policy (OPMEP) standards and to identify areas for improvement. These reviews have not identified systemic problems or required changes to PME over the past 10 years, and they have concluded that PME institutions are successful in meeting policy and curricula standards, as well as the intent of the Skelton Report. The MECC also obtained and analyzed existing survey data—from various JPME institutions—of students, graduates, and supervisors of graduates as an indirect assessment of the programs to identify common areas and trends in JPME, as well as outcome assessments, such as exams, quizzes, and oral presentations, from the past 2 years as a more direct measurement of student learning. Consistent with the RAND Standards report, while the MECC leveraged existing data, its report also presented the limitations of the survey data used for the study, which stem from variations in the survey questions and measures that the separate JPME institutions used to conduct their own surveys. Regarding gap assessments, our 2004 Guide for Assessing Strategic Training and Development Efforts in the Federal Government highlights the need to determine the skills and competencies necessary to meet current and future challenges and to assess any skill and competency gaps.
According to the MECC’s report, to determine the ability of joint education to meet the Joint Force 2020 joint leader development requirements, the MECC conducted a three-phase gap analysis. These three phases included: (1) refinement of the desired leader attributes, which included breaking these attributes down into subattributes; (2) analysis of data provided by 27 officer and enlisted programs across the JPME system to identify gaps in the JPME program; and (3) gap analysis and documentation to break down the data into cohorts and rank the desired leader attributes. Specifically, 27 officer and enlisted programs, which included the services’ staff and war colleges and National Defense University, among others, were asked to assess the current capabilities of their joint education programs to achieve Joint Force 2020 outcomes by addressing the following three questions:
Is the curriculum effective, that is, do we teach it?
Are current instructional methods effective, that is, how well do we teach and deliver it?
Are we achieving the educational outcomes, that is, can we measure it?
According to its report, the MECC relied on the professional opinion of the academic deans of the JPME schools and the directors of other joint education programs to make determinations about the effectiveness of their officer and senior enlisted programs based on the 35 subattributes identified by the MECC. To conduct the gap analysis, the MECC analyzed the 945 responses it received from the 27 programs (one rating for each of the 35 subattributes from each program) and used a stoplight color-coding scheme to summarize, by cohort group, each program’s assessment of its current capabilities to achieve the Joint Force 2020 outcomes. The gap analysis results were then coded, scored, and ranked by officer cohort group (precommissioning, intermediate, senior, CAPSTONE) using four categories to describe the program’s current capabilities; a simplified illustration of this scoring appears in the sketch below. In turn, this gap analysis helped inform the MECC’s 21 recommendations. While the MECC conducted a similar gap analysis for its senior enlisted JPME programs, the results of that analysis led the MECC to conclude that the department needed to further reconsider and redefine the desired leader attributes for enlisted personnel. The MECC has not yet fully developed timeframes and coordinated with all stakeholders, nor has it assessed the costs associated with implementing any of the study’s recommendations. As a result, it is not clear how the department will obtain concurrence among its stakeholders regarding an approach for moving forward, or how the department will hold its stakeholders accountable for implementing change. Furthermore, the department will not know whether the implementation of its recommendations will be cost-effective. We have previously found that project planning is the basis for controlling and managing project performance, including managing the relationship between cost and time. Our 2004 Training Guide highlights the need for agencies to develop a formal process to help ensure that strategic and tactical changes are promptly incorporated in training and development efforts. Specifically, that report advises that agencies develop plans that describe or outline the way in which the agency intends to incorporate strategic and tactical changes into its training and developmental efforts.
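The stoplight roll-up described above can be sketched in the same spirit. In the sketch below, each of the 27 programs rates each of the 35 subattributes (27 x 35 = 945 responses), and the responses are grouped by cohort and ranked weakest-first; the four category labels and the mapping from scores to categories are assumptions for illustration, not the MECC’s actual coding scheme.

```python
# Illustrative stoplight coding and ranking of program self-assessments.
# Scores are assumed to be integers 1-4; category labels are hypothetical.

from collections import defaultdict
from statistics import mean

CATEGORIES = ["red", "orange", "yellow", "green"]

def code_response(score):
    """Map a 1-4 self-assessment score to a stoplight category."""
    return CATEGORIES[score - 1]

def rank_by_cohort(responses):
    """responses: iterable of (cohort, subattribute, score) tuples.
    Returns each cohort's subattributes ranked weakest-first by mean score."""
    by_cohort = defaultdict(lambda: defaultdict(list))
    for cohort, subattr, score in responses:
        by_cohort[cohort][subattr].append(score)
    return {
        cohort: sorted(((s, mean(scores)) for s, scores in subattrs.items()),
                       key=lambda item: item[1])
        for cohort, subattrs in by_cohort.items()
    }

if __name__ == "__main__":
    demo = [
        ("intermediate", "respond to uncertainty", 2),
        ("intermediate", "joint doctrine", 4),
        ("senior", "respond to uncertainty", 3),
    ]
    for cohort, ranked in rank_by_cohort(demo).items():
        subattr, avg = ranked[0]  # weakest subattribute for this cohort
        print(f"{cohort}: '{subattr}' (mean {avg:.1f}, {code_response(round(avg))})")
```

The design choice here mirrors the report’s description: responses are first coded per program, then summarized by cohort group, so that the weakest subattributes surface at each level of the continuum rather than only in the aggregate.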
Our 2004 Training Guide also highlights that including important agency stakeholders in the process can contribute to an open and continuous exchange of ideas and information, particularly when it comes to ensuring that strategic and tactical changes are promptly incorporated into training and developmental efforts. According to the July 16, 2012 memorandum that directed the MECC study, the March 1, 2013 deadline for the results of the study was intended to help shape the fall 2013 academic year. The MECC completed its report on June 24, 2013, and provided us with a copy of the final report on July 1, 2013. Subsequently, officials provided us with an informational paper that identifies an office of primary responsibility and outlines details on the status of the recommendations. However, this paper does not fully lay out DOD’s planned actions in a transparent manner. For nine of the recommendations, the document identifies potential target dates for completion, one of which is to be completed later in 2013, and eight of which are to be completed not later than September 2014. For the remaining 12 recommendations, DOD identified a target date for completion as either “not applicable” or “to be determined” because most of those recommendations fall under the purview of the services, the services’ schools, or National Defense University. Accordingly, officials stated that consideration or implementation of any recommendations that fall under the purview of these stakeholders is at the discretion of those organizations and that the department cannot identify or impose timeframes for implementing those recommendations. We recognize that the implementation of several of the recommendations falls under the purview of the services, the services’ schools, or National Defense University. However, the MECC has not yet reached out to these entities to identify estimated target dates for implementation to help guide these efforts. Further, the document does not identify interim milestones for specific actions between the date of the report and the target completion dates. The document also states that completion or implementation of a number of the recommendations is contingent on revisions to the Enlisted Professional Military Education Policy or the Officer Professional Military Education Policy, which officials estimated may take 12 to 18 months. Officials did tell us that the Joint Staff’s Joint Professional Military Education Division staff and members of the MECC are currently in the beginning stages of incorporating the changes into policy and that the individual schools may begin to incorporate changes resulting from the study at any time. However, at the time of our review, the department’s process for monitoring the implementation of the MECC’s recommendations was unclear. Without established milestones and timeframes for taking follow-on action, and mechanisms for coordinating with all key officials and stakeholders and holding them accountable, DOD may not be reasonably assured that the study’s recommendations will be implemented across the department or that its JPME program will achieve the potential benefits resulting from these recommendations. In addition, the department did not include a requirement that the MECC assess the costs associated with the JPME program or any recommendations resulting from the study. 
The MECC’s report acknowledges that the resource-limited environment facing the Department of Defense will make it difficult to sustain current practices, such as maintaining small class sizes and small student-to-faculty ratios, at the intermediate and senior courses or to consider additional requirements for the lower-level courses. The report states that leveraging new learning approaches, such as increased distance learning options, will not provide an inexpensive or free solution for increased joint education requirements. However, the MECC did not analyze or consider either the costs associated with implementing the study’s recommendations or the efficiencies to be derived from implementing them. Of its 21 recommendations, the MECC identified five that require further study and other possible investments, which suggests a potential need for additional funding to conduct those studies and implement any findings. For example, the report included recommendations to (1) conduct a study to identify and evaluate potential educational tools to achieve certain desired leader attributes, and (2) consider and explore opportunities to incentivize and reward lifelong learning. The costs associated with these recommendations are unclear, and officials responsible for the study could not provide us with cost estimates. Prior studies of professional military education also did not, or were unable to, fully identify the costs associated with the overall JPME program or with individual JPME programs. The Skelton Panel, for example, inquired into the cost per student at each school. The Office of the Secretary of Defense provided the panel with raw data produced with different methodologies by service, and sometimes by school, which resulted in widely varying costs for roughly similar programs. More recently, the House Armed Services Subcommittee for Oversight and Investigations made its own effort to ascertain whether a uniform cost accounting system existed for DOD’s professional military education system. In response, the department provided cost-per-student figures based on standardized criteria. Comparative figures were no longer characterized by such enormous variations, but there were still a number of unexplained differences in the data provided by the Office of the Secretary of Defense. Our 2004 Training Guide states that agencies should strategically target training and development investments to help ensure that resources are not wasted on efforts that are irrelevant, duplicative, or ineffective. Our Training Guide further recommends that agencies consider the appropriate level of investment and prioritize funding so that the most important training needs are addressed first. We have previously concluded that leading practices such as accounting for program resources enable managers to better manage existing resources and plan for future programmatic needs. Further, accounting for program resources is becoming increasingly important for decision makers within the department, who must make difficult trade-off decisions in a sequestration budget environment. According to officials, the MECC omitted cost elements from the study so as not to limit its assessment of the department’s needs for educating its leaders for Joint Force 2020 based on resources. Officials told us that they did not want to eliminate options for change based on the costs associated with them. 
However, by not assessing the costs of the MECC’s recommendations, as well as any efficiencies that may be achieved through their implementation, DOD may not be in a position to know whether it has the resources to implement them or whether efficiencies will be achieved. Further, without plans to assess costs in the near term as part of continued efforts to implement the results of the JPME study, decision makers may not know whether implementing the recommendations will be cost-effective. The experiences of the last 12 years of operations in Iraq and Afghanistan have underscored the importance of U.S. military officers’ ability to work jointly across the services and reinforced the need for an effective joint military education program. DOD’s recently completed study of JPME is an important step forward as DOD considers how to adapt its joint military education program to reach the goals it set forth for Joint Force 2020. Nonetheless, some questions remain about how well its findings can be applied to the task of revising the department’s officer and enlisted JPME policies. Specifically, it is important that DOD act transparently and take certain actions to ensure that the study’s findings and recommendations can be used effectively. For instance, without established milestones and timeframes for conducting follow-on actions, the involvement of necessary stakeholders, and an implementation approach that provides for accountability, the department may not be assured that the study’s findings and recommendations will be incorporated into the JPME program at institutions across the department and the services in a timely manner. Additionally, without reliable information on the costs of implementing the study’s recommendations, decision makers could be hindered in determining the most efficient allocation of departmental resources for JPME. DOD’s JPME program will continue to play a role in the future development of departmental leaders, and acting to help ensure that the findings and recommendations of this and any future studies are both utilized and cost-effective could allow the department to make the best use of the resources it devotes to this program. To guide the implementation of actions DOD identified in its study on JPME, we recommend that the Chairman of the Joint Chiefs of Staff direct the Director for Joint Force Development to take the following two actions: (1) establish well-defined timeframes for conducting follow-on actions, coordinate with all stakeholders, and identify key officials responsible for implementing the study’s recommendations, to help ensure the usefulness, timeliness, and implementation of any actions DOD takes in response to the findings and recommendations contained in its study; and (2) assess the costs of implementing the recommendations made and the efficiencies to be derived from them, in order to implement DOD’s recommendations in a cost-effective manner. In written comments on a draft of this report, DOD concurred with our two recommendations to guide implementation of the actions that DOD identified in its review of joint education. DOD’s comments are reprinted in appendix IV. DOD also provided technical comments on the draft report, which we incorporated as appropriate. 
Regarding our first recommendation that DOD establish well-defined timeframes for conducting follow-on actions, coordinate with all stakeholders, and identify key officials responsible for implementing the study’s recommendations, DOD stated that, as the Joint Staff pursues implementation of the recommendations in its review, it will establish timelines, identify offices of primary responsibility, and coordinate with stakeholders as appropriate. DOD also noted that efforts are already underway to update the Chairman of the Joint Chiefs of Staff’s professional military education policies to ensure that these policy directives are consistent with the results and recommendations of the recent study. Regarding our second recommendation that DOD assess the costs of implementing recommendations made and efficiencies to be derived from the recommendations, the department again concurred, stating that, although assessing the costs of its recommendations is beyond the original scope and purpose of the study, it will consider costs and efficiencies prior to moving forward with the implementation of any recommendations. It also stated that a number of recommendations will be implemented through policy changes, which are cost neutral. Further, it stated that the services will also consider such costs and efficiencies for recommendations under their purview. We are sending copies of this report to the appropriate congressional committees, the Chairman of the Joint Chiefs of Staff, and the Directorate for Joint Force Development. In addition, this report will also be available at no charge on the GAO Web site at http://gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To identify the purpose of the Department of Defense’s (DOD) study of the Joint Professional Military Education (JPME) program, we reviewed relevant DOD guidance and other documents related to the study, including the final report dated June 24, 2013, and met with knowledgeable officials. We reviewed the Chairman of the Joint Chiefs of Staff’s July 16, 2012 white paper that called for and outlined the general focus of the study. We also reviewed the Director for Joint Force Development’s (J7) memorandum, which was issued with the white paper, clarified the overall intent of the study and posed specific questions to help frame it, and assigned responsibility for the study to the Military Education Coordination Council (MECC). We analyzed the MECC’s meeting minutes and interim briefing report, which provided insight into how the MECC addressed the questions posed in the Director’s memorandum, as its approach differed somewhat from the framework provided in the memorandum. We then compared the aforementioned guidance documents and interim report findings to the final report to assess consistency. In addition, we met with knowledgeable officials from the Joint Professional Military Education Division within the Joint Staff’s Directorate for Joint Force Development who, in conjunction with the MECC, were responsible for guiding DOD’s effort. We also met with the officers of primary responsibility and/or team leads responsible for carrying out the various elements of the study. 
These meetings helped to ensure our understanding of the purpose and objectives of the study. To gain an understanding of the MECC’s findings and recommendations, we reviewed the section of the final report that focused primarily on the study’s findings and recommendations for change. Further, we met with knowledgeable officials within the Directorate for Joint Force Development and the team lead for the findings and recommendations portion of the study to gain clarification and insight into the major takeaways from the study, areas in need of change, and any plans for the way ahead. To assess the methodology DOD used to conduct the joint professional military education study and its planning for follow-on actions, we reviewed the MECC’s final report and supporting documents. We interviewed DOD officials from the Joint Professional Military Education Division within the Directorate for Joint Force Development and officials from across DOD who were identified as officers of primary responsibility or team leads for specific study tasks associated with the study’s two objectives. We reviewed relevant reports and guidance by GAO; the Office of Management and Budget; the Office of Personnel Management; and private sector organizations such as the RAND Corporation, among others, to identify select leading practices for evaluating training, educational, and other programs. The following list includes select sources we consulted for leading practices, in addition to those listed throughout the report, and against which we assessed DOD’s JPME study: GAO, GAO Schedule Assessment Guide: Best Practices for Project Schedules, GAO-12-120G (Washington, D.C.: May 2012); GAO, Designing Evaluations: 2012 Revision, GAO-12-208G (Washington, D.C.: January 2012); RAND Corporation, Standards for High-Quality Research and Analysis (Santa Monica, Calif.: November 2011); Office of Personnel Management, Training Evaluation Field Guide (Washington, D.C.: January 2011); GAO, Government Auditing Standards: 2007 Revision, GAO-07-162G (Washington, D.C.: January 2007); GAO, Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government, GAO-04-546G (Washington, D.C.: March 2004); Office of Management and Budget, Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs (Washington, D.C.: Oct. 29, 1992); and GAO, Designing Evaluations, GAO/PEMD-10.1.4 (Washington, D.C.: March 1991). From these documents, we identified key practices for evaluating educational programs and information that were applicable for the purposes of this engagement. We then compared DOD’s methodology with these select leading practices for program evaluation. We also reviewed documents provided to us and identified by DOD officials as project plans for the study, as well as documentation of the department’s intended follow-on actions. We conducted this performance audit from March through October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The MECC made a total of 21 recommendations, which collectively address the study’s two objectives and span four categories: (1) desired leader attributes, or educational outcomes; (2) joint education continuums; (3) lifelong learning and advancements in learning technologies; and (4) faculty quality. Several of these recommendations call for additional study by the National Defense University and the services to support the achievement of leader attributes. Other recommendations emphasize the importance of strengthening the educational outcomes at the primary level of joint education, making education programs more accessible, and using prior learning assessments to tailor education opportunities to students’ needs, among other things. Table 2 lists the recommendations by category. Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. In addition to the individual named above, David Moser, Assistant Director; Richard Burkard; Timothy Carr; Gustavo Crosetto; Melissa Emrey-Arras; Jennifer Madison; Terry Richardson; Jennifer Weber; and Erik Wilkins-McKee made key contributions to this report.
To facilitate unified operations across the services, DOD has provided JPME programs at departmental and service academic institutions for almost 30 years. In July 2012, the Director for Joint Force Development, who reports to the Chairman of the Joint Chiefs of Staff, tasked the Military Education Coordination Council (MECC) to review DOD's joint education objectives and institutions to help ensure that outcomes match requirements for the strategic environment projected for Joint Force 2020. Subsequently, the National Defense Authorization Act for Fiscal Year 2013 mandated GAO to report to Congress on the analytical approach used by the MECC not later than 90 days after the Director submitted the MECC's report to GAO, which the Director did on July 1, 2013. In this report, GAO (1) identifies the purpose of DOD's study of the JPME program, and (2) assesses the methodology DOD used to conduct the JPME study and its planning for follow-on actions. GAO analyzed the MECC's final report and relevant planning documents, interviewed DOD officials who conducted portions of the study, and reviewed leading practices for evaluating programs by GAO and other entities. The purpose of the Department of Defense's (DOD) study of its Joint Professional Military Education (JPME) program was to identify (1) desired leader attributes as part of the JPME career-long learning experience needed to support DOD's strategic vision and (2) any gaps in the current educational program to facilitate the development of the leaders needed to achieve that vision. Specifically, the MECC--following direction from the Chairman of the Joint Chiefs of Staff and the Director for Joint Force Development--proposed six desired leader attributes, including, for example, the ability to anticipate and respond to surprise and uncertainty, and concluded that the existing institutional structure for providing JPME should be retained. The MECC's gap analysis, however, indicated that in order to support the development of these attributes, a greater emphasis on career-long self-directed learning is also needed, among other things. In line with its findings, the MECC made 21 recommendations to improve the JPME program that address increased accessibility of educational programs, changes to teaching methodologies and assessment mechanisms, and enhanced use of technology. DOD's methodology generally included leading practices for assessing training programs, but DOD has not yet fully planned for follow-on actions and engaged all stakeholders, nor has it assessed the costs of the MECC's recommendations to provide decision makers with more timely and complete information and help ensure that the study's results are implemented. Specifically, the MECC reviewed other related studies, conducted a gap analysis based on existing and future needs, and used the best available data and acknowledged limitations--all practices identified by GAO and other government agencies and research institutions as leading practices for successful evaluations of training and other programs. By contrast, DOD documents state that the results of the study were intended to inform and shape the fall 2013 academic year, but the MECC did not complete its study until June 24, 2013, and provided its report to GAO on July 1, 2013. Further, DOD has not yet identified milestones and timeframes for implementing all of its recommendations. 
Subsequently, the department developed an update to inform actions for moving forward on its recommendations. DOD identified target dates for completion not later than September 2014 for 9 recommendations, but did not include interim milestones, and has not yet developed target dates for 12 recommendations. In addition, the MECC did not formalize plans to achieve the buy-in of all stakeholders for recommendation implementation. Without this information, it may be difficult for DOD to ensure that stakeholders agree on and are accountable for implementing the recommendations in a timely manner. Finally, the MECC did not analyze the costs or efficiencies associated with implementing its recommendations, but it identified 5 recommendations that could incur additional costs because they require further study. Leading practices such as accounting for program resources enable managers to better manage existing resources and plan for future programmatic needs. Without cost data on the study's recommendations or plans to assess cost in the near term as part of continued efforts to implement the results of the JPME study, decision makers could be hindered in determining the most efficient allocation of departmental resources for JPME. GAO recommends that DOD establish well-defined timeframes for conducting any follow-on actions and include stakeholders necessary for implementation, and assess the costs of implementing recommendations made in the MECC’s recent study of joint professional military education. DOD concurred with both of GAO’s recommendations.
CENTCOM, whose area of responsibility encompasses 27 countries in South and Central Asia, the Horn of Africa, the Arabian Peninsula, and Iraq, began planning for OIF in late 2001 (see fig. 1). Starting in mid-2002, CENTCOM began to improve the U.S. military’s infrastructure in Kuwait. This included an expansion of fuel and port facilities to support the arrival of U.S. military units. Beginning in the fall of 2002, CENTCOM began to deploy troops to the OIF area of operation. These deployments continued up to, and beyond, the start of major combat operations in Iraq on March 19, 2003. According to the Defense Manpower Data Center, the number of military personnel deployed in CENTCOM’s area of responsibility in support of OIF and Operation Enduring Freedom steadily increased from over 113,000 in December 2002 to over 409,000 in May 2003 (see fig. 2). During major combat operations, over 280,000 U.S. military personnel were deployed in Iraq, Kuwait, and nearby Persian Gulf nations. All of the services were represented in the theater, but U.S. Army units formed the bulk of military personnel. After major combat operations were officially declared over on May 1, 2003, the total number of personnel in CENTCOM’s area of responsibility began to gradually decrease. However, U.S. and coalition forces continue to conduct stabilization operations in Iraq, and DOD increased the number of military personnel in Iraq to support the elections in January 2005. CENTCOM’s command authority over units deployed to its area of responsibility allows it to direct all necessary military actions of assigned military forces, including units deployed in both Operation Enduring Freedom in Afghanistan and OIF. Command authority also provides the geographic combatant commander with directive authority over logistics. The services and other defense components, however, share the responsibility of supplying U.S. forces. The directive authority gives the combatant commander the ability to shift logistics resources within the theater, but logistics support outside of the area of responsibility is usually dependent on the services. The combatant commander relies on the services’ logistics components, such as the U.S. Army Materiel Command (AMC), the U.S. Marine Corps Logistics Command, and DLA, to purchase supplies and manage inventory. AMC has major subordinate commands that manage supply inventories, such as the Communications-Electronics Command (CECOM) and the Tank-automotive and Armaments Command (TACOM). DLA has a role in packaging supplies for shipment, while the U.S. Transportation Command is responsible for delivering them to theater. The combatant commander assigns one service as the lead for logistics support, including transportation, in the theater. During OIF, the Army was the lead service for logistics support. The military services rely heavily on their specifically designated war reserve stock, including weapon systems, equipment, and spare parts, to equip units when they first arrive in a theater of operation. The Army’s war reserves consist of major items, including trucks, and secondary items, such as spare parts, food, clothing, medical supplies, and fuel. Spare parts have the largest dollar value of the Army’s secondary items. War reserves are protected go-to-war assets that are not to be used for purposes such as improving peacetime readiness or filling unit shortages. Some of these assets are located in Southwest Asia, the Pacific, Europe, and on special war reserve ships. 
War reserves are funded through direct congressional appropriations that are requested in the services’ annual budget submissions. AMC is responsible for determining the Army’s requirements for war reserve spare parts. To do this, AMC officials use a computer model—the Army War Reserve Automated Process system. The model uses DOD planning guidance and Army information on anticipated force structure, including a list of the end items and associated spare parts. For each end item or part, the model uses data on expected usage and spare parts consumption rates based on breakage, geography, environment, and rates of equipment loss due to battle damage. The Marine Corps Logistics Command and logistics planners from Marine operational units are responsible for annually determining the adequacy of war reserve stock based on current operational plans. Once this is determined, planning officials confirm the availability of the supplies with DLA and other supporting logistics agencies that manage individual items. DOD forecasts operational requirements for spare and repair parts differently than it does for items that result from rapidly emerging needs. DOD forecasting methods for spare and repair parts vary by service. The Army normally uses a computer model to forecast its spare and repair parts requirements based on the average monthly demand over the previous 24 months. The Marine Corps also uses models and involves operational and logistics planners at several levels of command to validate its forecasted requirements. Operational requirements to support rapidly emerging needs, such as Interceptor body armor and the up-armored High-Mobility Multi-Purpose Wheeled Vehicle (HMMWV), are developed outside of normal supply forecasting procedures. They are identified through operational needs statements from the theater that are validated and resourced by the Army. Units in theater submit operational needs statements for the items, which are combined by their higher headquarters into theater requirements. The Coalition Forces Land Component Command communicates these requirements to the Department of the Army, where they are validated and resourced by the offices of the Assistant Secretary of the Army (Financial Management and Comptroller), the Deputy Chief of Staff for Operations, the Deputy Chief of Staff for Program and Analysis, and the Deputy Chief of Staff for Logistics and eventually transmitted to the program manager. Generally, supplies and equipment for customers—including military units and DOD agencies—are purchased using defense working capital funds or procurement funds. Of the nine items we examined, seven are purchased using defense working capital funds (assault amphibian vehicle (AAV) generators, armored vehicle track shoes, chemical-biological suits, lithium batteries, Marine Corps helicopter rotor blades, Meals Ready-to-Eat (MRE), and tires). Working capital fund managers at the logistics agencies obligate and spend funds to purchase supplies from manufacturers and repair items to build up inventories in anticipation of sales. Military units then order supplies from the logistics agencies. When the requisitioned supplies are delivered, the units pay for them with funds that are appropriated annually by Congress. The logistics agencies then use this revenue to pay the manufacturers and to cover their own operating costs. 
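The revolving cycle just described, in which fund managers obligate authority to buy stock ahead of demand and customer payments replenish the fund, can be sketched in a few lines. This is a minimal illustration under invented numbers; the surcharge rate and dollar amounts are assumptions, not figures from this report.

```python
# A minimal sketch of the revolving flow of a defense working capital fund,
# as described above. Dollar figures and the surcharge rate are invented for
# illustration; actual rates and accounts differ.
class WorkingCapitalFund:
    def __init__(self, obligation_authority):
        self.cash = obligation_authority   # authority available to obligate
        self.inventory_cost = 0.0          # cost basis of stock on hand

    def buy_inventory(self, cost):
        """Obligate funds to purchase stock in anticipation of customer demand."""
        if cost > self.cash:
            raise ValueError("insufficient obligation authority")
        self.cash -= cost
        self.inventory_cost += cost

    def sell_to_unit(self, cost, surcharge=0.10):
        """A unit pays with its appropriated funds; the revenue (cost plus a
        surcharge covering operating expenses) replenishes the fund."""
        self.inventory_cost -= cost
        self.cash += cost * (1 + surcharge)

fund = WorkingCapitalFund(obligation_authority=1_000_000)
fund.buy_inventory(600_000)          # stock up ahead of demand
fund.sell_to_unit(600_000)           # unit requisitions and pays
print(round(fund.cash))              # 1060000: revenue sustains operations
```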
Several funds make up the defense working capital funds, including the Army Working Capital Fund, the Navy Working Capital Fund (which finances Marine Corps managed items), and the Defense-wide Working Capital Fund (which finances Defense Logistics Agency-managed items). AMC uses the Army Working Capital Fund to purchase and maintain supplies. The remaining two items (Interceptor body armor and up-armored HMMWVs) are purchased with procurement funding because they are still in the process of initial issuance. Procurement funds are used to pay for such expenses as the purchase of weapons and weapon components, communication and support equipment, munitions, initial and replenishment spare parts, and modernization equipment. Project managers for these items receive congressional appropriations to fund purchases of additional supplies. DOD guidance defines distribution as the operational process of synchronizing all elements of the logistics system to deliver the “right things” to the “right place” at the “right time” to support the combatant commander. These elements include physical, financial, information, and communication networks, which can be divided into two general categories—the actual movement of supplies (physical networks) and the use of information technology (financial, information, and communication networks) to support distribution system activities. Table 2 lists the primary DOD regulation and joint doctrine that guide the distribution process. In its guidance, DOD identifies eight fundamental principles of theater distribution: 1. Centralized management: Identify one manager who is responsible for distribution, visibility, and control of items in the theater distribution system. 2. Optimize the distribution system: Give distribution managers the ability to maintain visibility in locations under their control and provide them with resources to meet changing requirements. 3. Velocity over mass: Substitute speed and accuracy for large investments in inventory. 4. Maximize throughput: Reduce the number of times that supplies must be opened and sorted. 5. Reduce customer wait time. 6. Maintain minimum essential stocks: Reduce reliance on large, costly stockpiles. 7. Maintain continuous, seamless, two-way flow of resources: Apply the distribution principles to maintain an integrated and seamless distribution system. 8. Achieve time definite delivery: Deliver the right supplies to the combatant commander at the right place and time. Figure 3 illustrates the complexity of DOD’s joint distribution system. The system moves supply items from inventories, vendors, and repair facilities in the U.S. to deployed units in the theater. The system also returns items to the U.S. for repair and maintenance. Several DOD components have recently commissioned assessments of logistics operations during OIF with the aim of identifying areas for improvement. One study, Objective Assessment of Logistics in Iraq, commissioned by the Deputy Under Secretary of Defense (Logistics and Materiel Readiness) and the Director for Logistics, Joint Staff, was published in 2004 by the Science Applications International Corporation (SAIC). This study identified problems with DOD’s logistics support and contained several recommendations that were endorsed by the Under Secretary of Defense (Acquisition, Technology, and Logistics) such as synchronizing the distribution chain from the U.S. to CENTCOM’s area of responsibility and resolving issues with technology. 
Another study, commissioned by the Army Deputy Chief of Staff for Logistics, is an independent assessment of the Army’s logistics experience in OIF by the RAND Corporation’s Arroyo Center. This study focuses on how Army forces were sustained and the performance of the sustainment system during combat operations and initial stability and support operations. RAND’s report is currently under review by the Army. During OIF, DOD was responsible for moving over 2 million tons of supplies and spare parts to theater. U.S. troops experienced shortages of seven of the nine items we selected for review. According to the 2004 National Military Strategy, U.S. forces expect to have sufficient quantities of the right items at the right time. However, demand for the seven items exceeded availability sometime between October 2002 and September 2004. The overall impact of these shortages on military operations is difficult to quantify because it varied between combat units and is not always apparent in DOD’s readiness systems. The remaining two items that we examined did not experience shortages in theater. Detailed case studies for the nine items are in appendixes II-X. U.S. troops in the OIF theater did not always have sufficient quantities of seven items that we reviewed. For some items, the shortages occurred primarily during initial troop deployments and major combat operations in early 2003; for other items, shortages emerged during the sustainment period after major combat operations were declared over in May 2003. The overall impact is difficult to quantify because it differed between units. For example, while units in the 3rd Infantry Division reported that tire shortages affected their mission by forcing them to abandon equipment, the 4th Infantry Division reported that its tire shortages had no effect on its mission. The following describes the shortages for each of the seven items. Generators for AAVs. Marine Corps units in Iraq experienced shortages of generators for their AAVs during deployment and combat operations in early 2003. The AAV is a landing vehicle that the Marine Corps used as an armored personnel carrier in Iraq. Without the generator, which provides electric power, the AAV cannot operate. Although 140 generators were reported shipped from the U.S., Marine forces in theater reported receiving only 15. Neither we nor the Marine Corps could find the remaining 125 in the supply system. While the service did not document any operational impacts specifically due to generator shortages, its forces had to strip parts from about 40 nonoperational vehicles to maintain the operational capabilities of other vehicles. Track shoes for Abrams Tanks and Bradley Fighting Vehicles. As the conflict in Iraq continued, track shoes, essential components of combat vehicles such as Abrams tanks and Bradley Fighting Vehicles, were not available to meet increasing demands. Although sufficient quantities of track shoes existed to meet demand at the beginning of combat, by May 2003 actual demand was 5 times the forecasted demand, primarily because of the heavy wear and tear on track shoes as a result of high mileage, poor road conditions, and extreme desert heat. Major combat units reported that significant shortages of track shoes negatively affected their operational capabilities. For example, the 3rd Infantry Division reported in June 2003 that 111 (60 percent) of its 185 Abrams tanks were unable to perform their missions because of supply shortfalls that included track shoes. 
Furthermore, 159 (67 percent) of the division’s 237 Bradley Fighting Vehicles were not mission capable for the same reason. Interceptor Body Armor. Demand for Interceptor body armor exceeded supply during OIF. The Coalition Forces Land Component commander decided to increase individual protection by issuing the armor to all troops and civilians. As a result, demand for the body armor surged, with quarterly demand rising from a pre-war level of about 8,600 vests and 9,600 plates to about 77,000 vests and 109,000 plates by the time the war commenced in March 2003. Back orders for plates peaked at 598,000 in November 2003, while back orders for vests reached 328,000 in December. Even though the services did not report that the lack of body armor impacted their missions during OIF, there were serious concerns. For example, combat support units in the Army and Marine Corps were among the last to be equipped with the armor, increasing the risk to personnel given the enemy’s focus on attacking supply routes. Lithium batteries. Army and Marine Corps forces operating in Iraq experienced severe shortages of lithium batteries, particularly BA-5590s and BA-5390s, during major combat operations in the spring of 2003. These nonrechargeable batteries power some 60 critical communications and electronics systems, such as radios and missile guidance systems. Worldwide demand for these batteries surged from a peacetime average of below 20,000 per month prior to September 11, 2001, to a peak rate of over 330,000 in April 2003. As a result, the number of back orders rose rapidly to 900,000 by May 2003. According to Marine Corps officials, if the war had continued at the same pace into May 2003 or beyond, Marine units would have experienced degraded communications capability and increased risk as a result of battery shortages. MREs. U.S. forces in Iraq experienced shortages of MREs primarily during the deployment and major combat phases in February, March, and April 2003, before normal dining facilities were established. The peak monthly demand for MREs rose to more than 1.8 million cases, while inventories dropped to a level of 500,000 cases. In late April, when other food options became available, demand fell rapidly. While certain Army units reported running out of MREs, available data show only that they came close. According to a RAND study, some Army units came within an estimated 2 days or less of exhausting their on-hand quantities. Similarly, according to a Center for Naval Analyses study, at times Marine Corps combat support units had less than 1 day of MREs on hand. As a result, both Army and Marine Corps units were at risk of running out of food if supply distribution was hindered. Tires for 5-ton trucks and HMMWVs. The rising demand for truck tires during and after major combat operations in Iraq nearly exhausted existing inventories. The demand grew as vehicles were driven long distances and were modified with add-on armor. For example, in August 2003, the Army’s inventory contained only 505 tires for 5-ton trucks, which fell far below the worldwide monthly demand of more than 4,800 tires. As a result, back orders spiked to over 7,000 for 5-ton truck tires and to over 13,000 for HMMWV tires. The shortages reduced the operational capabilities of these vehicles and negatively impacted operations in Iraq. 
For example, 3rd Infantry Division units reported that tire shortages forced them to abandon equipment, and Marine Corps units reported stripping and abandoning otherwise good vehicles because of a lack of tires. Up-armored HMMWVs and add-on-armor kits. Since the U.S. military began identifying requirements for these vehicles during the summer of 2003, there has been a gap between the number of vehicles required and the number being produced by the industrial base. This new requirement was based on the need to protect soldiers and Marines executing distribution and force protection missions. The up-armored HMMWV provides enhanced protection against rifle rounds and explosive blasts, while the add-on-armor kits provide some additional protection to previously unarmored vehicles. As of September 2004, only 5,330 of the 8,105 required vehicles were in theater. The overall impact of the shortages of up-armored HMMWVs and add-on-armor kits is difficult to measure because units do not report the direct effects of using unarmored HMMWVs. However, according to a Center for Army Lessons Learned study, the risk of harm to both personnel and equipment from improvised explosive devices is greatly reduced when they are transported in an up-armored HMMWV. Two items we examined—chemical-biological suits and Marine Corps helicopter parts—did not experience shortages. In these cases, although demands were high because of wartime operations, the defense supply system was able to meet the needs of deployed forces. A discussion of the availability of the two items follows. Chemical-biological suits. Although there was a perception that sufficient quantities of the new Joint Service Lightweight Integrated Suit Technology (JSLIST) chemical-biological suits were not available during OIF, our work did not identify any actual shortages in the theater of operations. Concerns about a shortage of chemical-biological suits arose as a result of an October 2002 congressional request that DOD provide suits to all military and civilian personnel located in the OIF theater. However, according to DLA, the contracting agent for chemical-biological suits, and our analysis, there were sufficient quantities of the suits in the inventory to meet demand during OIF. Marine Corps helicopter rotor blades. Although concerns were raised about shortages of parts for Marine Corps helicopters, specifically rotor blades, we did not identify any shortages in the theater of operations. Marine Corps officials reported there were no rotor blade shortages, and our analysis of supply data confirms this. In addition, the mission capable rates during OIF for the two helicopters we reviewed—the UH-1N and the CH-53E—were comparable to peacetime rates. We were not always able to identify the impact of specific shortages because, although DOD’s supply system showed shortages of items in theater, DOD’s readiness reporting systems did not always show the impact of these shortages on unit operational capability. Such systems as the Global Status of Resources and Training System and Unit Status Reports are intended to identify the ability of units to accomplish their missions and the problems affecting mission performance each month. In addition, other reporting mechanisms, such as lessons learned reports or after-action reports, may also disclose the impact of shortages on operations but do not tie directly to readiness reporting. 
As a result, we used a variety of documents, some obtained directly from the units, to identify the impact of supply shortages. For our nine items, the information reported through the various readiness systems was, in some cases, inconsistent with the impact cited in other reports. For example, in July 2003, unit status reports from the 4th Infantry Division’s battalions showed that approximately 145 to 150 of their 176 Bradley Fighting Vehicles were mission capable, translating to a mission capability rate of around 84 percent. However, a May 2004 lessons learned report prepared by the division indicated that the overall mission capability rate for its Bradleys was 32 percent during the July 2003 time frame and that the degraded status was due to a shortage of armored vehicle track shoes and other vehicle suspension items. In a June 2003 status report, four of the 3rd Infantry Division’s five infantry battalions reported that 65 percent of their Bradley vehicles were mission capable. However, a 3rd Infantry Division report for June 2003 showed that 65 percent of the division’s Bradleys were non-mission capable because of supply-related reasons, which unit officials attributed almost exclusively to track shoe shortages. There were also instances of readiness information about unit status in Iraq not being reported. For example, in August 2003, the 4th Infantry Division’s five armor battalions and one cavalry unit did not enter any mission capability data into the readiness reporting systems about the status of their 247 Abrams tanks. However, the May 2004 briefing report prepared by the division indicated that by July 2003, 28 percent of the division’s tanks were non-mission capable. The primary reason given was a lack of track shoes and related suspension parts. Five deficiencies contributed to shortages in the supply of seven of the nine items that we studied. According to DOD data and contractor studies, these deficiencies also affected other items in the supply system. The deficiencies were (1) inaccurate and inadequately funded Army war reserve requirements, (2) inaccurate supply forecasts, (3) insufficient and delayed funding, (4) delayed acquisition, and (5) ineffective distribution. Table 3 identifies the deficiencies that affected each of the seven supply items. The inaccurate requirements for and poor funding of war reserves affected the availability of three of the supply items (armored vehicle track shoes, lithium batteries, and tires). Annual updates of the Army’s war reserve requirements for supply items have not been conducted and, as a result, the Army did not have an accurate estimate of the spare parts and other items needed for a contingency such as OIF. In addition, over the past decade, the Army underfunded its war reserve spare parts, which has forced managers to allocate money for certain items and not for others. Army officials told us that annual updates of the Army’s war reserve requirements for spare parts have not been conducted since 1999. AMC uses a computer model, called the Army War Reserve Automated Process, to determine its requirements for spare parts in the war reserve. This model is supposed to be run on the basis of annually updated defense planning guidance and is designed to support the latest war plans and Army force structure. According to AMC officials, the model has not been run since 1999 because the Department of the Army has not provided the latest guidance, which details the force structure and operations that the Army’s war reserve must support. 
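For illustration, a war reserve requirements model of the kind described here combines per-item consumption factors with a planning horizon. The following is a minimal, hypothetical sketch; the actual Army War Reserve Automated Process uses DOD planning guidance, detailed force structure data, and many more factors whose values are not published in this report, so every input below is invented.

```python
# A simplified, hypothetical sketch of the kind of computation a war reserve
# requirements model performs for a single spare part. Factor names and
# values are illustrative only, not the Army's actual inputs or weights.
def spare_part_requirement(fleet_size, daily_usage_per_item,
                           consumption_rate, environment_factor,
                           battle_loss_rate, days_of_supply):
    """Estimate war reserve stock for one spare part: expected consumption,
    scaled for harsh operating conditions and battle losses, over the
    planning period the reserve must cover."""
    daily_demand = fleet_size * daily_usage_per_item * consumption_rate
    adjusted = daily_demand * environment_factor * (1 + battle_loss_rate)
    return round(adjusted * days_of_supply)

# Illustrative inputs for a tracked-vehicle part supporting a 45-day plan.
print(spare_part_requirement(fleet_size=1_000,
                             daily_usage_per_item=0.5,   # uses per vehicle-day
                             consumption_rate=0.2,       # breakage/replacement
                             environment_factor=1.5,     # heat, poor roads
                             battle_loss_rate=0.1,
                             days_of_supply=45))         # prints 7425
```

Because such a model is only as good as its planning guidance and usage rates, requirements computed from stale inputs understate wartime demand, as the examples that follow show.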
Although AMC officials attributed the model’s disuse to the absence of updated guidance, an Army official provided us with copies of the Army guidance from 2001, 2003, and 2005 that AMC could have used to initiate computation of war reserve requirements. The Army official stated that AMC did not run the model for a variety of reasons, such as support for ongoing missions and problems with the implementation of a new database and modeling system. Because the requirements were out-of-date, the war reserve inventories for some spare parts were inadequate and could not meet initial wartime demands. For example, the war reserve requirement for nonrechargeable lithium batteries (BA-5590s) was not sufficient to support initial operations in OIF. The requirement for BA-5590s was set at 180,000 to support the first 45 days of operations, but this amount was considerably lower than the actual demand of nearly 620,000 batteries recorded during the first 2 months of the conflict. CECOM officials attributed this mismatch to inaccurate battery failure and usage rates in the 1999 model. They also said the model did not include all the communications systems that used nonrechargeable lithium batteries. In another example, the war reserve requirement for track shoes for armored vehicles was inadequate to keep pace with actual demand during the early months of combat. As table 4 shows, the pre-OIF war reserve requirement for track shoes for Abrams tanks and Bradley Fighting Vehicles was, respectively, 5,230 and 5,626; however, in April 2003 the actual worldwide demand for these track shoes was four times higher. The situation was even worse for 5-ton truck tires. Since the end of major combat operations, war reserve managers have made manual adjustments to the requirements to reflect supply experiences from OIF. For example, officials told us that they have adjusted the Army and the Marine Corps war reserve requirement for BA-5590 and BA-5390 lithium batteries upward to more than 1.5 million batteries, based on the average monthly demand of 250,000 batteries experienced during OIF multiplied by 6 months. Similarly, war reserve managers at TACOM have increased the requirements for Abrams and Bradley track shoes to 32,686 and 34,864, respectively. While these actions may correct a particular problem, they do not address the systemic inaccuracy of the Army’s war reserve requirements. In prior reports, we have identified problems with the Army’s process for computing war reserve spare parts requirements. In a 2001 report, we recommended that the Secretary of Defense direct the Secretary of the Army to develop and use the best available consumption factors; improve the methodology used to determine requirements by considering planned battlefield maintenance practices; and develop industrial-base capacity based on current, fact-based estimates for the war reserve. In a 2002 report, we recommended that the Secretary of Defense direct the Secretary of the Army to have the Commander of AMC change its process of calculating war reserve requirements. While the Office of the Secretary of Defense concurred with these recommendations, the Army has yet to implement them. The Army’s war reserve requirements for spare parts have been significantly underfunded for many years, indicating that the Army has made a risk management decision not to fully fund them. In November 1999, Army documents indicated that the Army had only $1.3 billion in parts prepositioned, or otherwise set aside for war reserves, to meet its stated requirement of $3.3 billion. 
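The two calculations cited above are simple enough to reproduce directly. The sketch below uses only the battery demand and November 1999 funding figures from this report; no other inputs are assumed.

```python
# Worked arithmetic behind two figures cited above: the manual war reserve
# adjustment (average monthly demand times a planning horizon) and the
# November 1999 funding shortfall (requirement minus funded amount).
def war_reserve_requirement(avg_monthly_demand, months_of_supply):
    return avg_monthly_demand * months_of_supply

# BA-5590/BA-5390 lithium batteries: OIF demand averaged 250,000 per month,
# multiplied by a 6-month horizon.
print(war_reserve_requirement(250_000, 6))        # 1500000 batteries

# November 1999 war reserve spare parts funding, in billions of dollars.
requirement, funded = 3.3, 1.3
print(f"funded share: {funded / requirement:.0%}, "
      f"shortfall: ${requirement - funded:.1f}B")  # 39%, $2.0B
```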
AMC data indicate that a lack of funding for war reserve spare parts continues to be an issue. As table 5 shows, as of October 2004, only about 24 percent ($561.7 million out of $2,327.4 million) of AMC’s total spare parts requirement was funded and on hand. Moreover, AMC data show this pattern of underfunding is expected to continue through fiscal year 2009. As a result of this low funding, war reserve managers told us that they must prioritize how the available funding is allocated based on their professional experience. For example, the war reserve manager for TACOM reported that he tends to spend his available funds on expensive items with long production lead times, such as tank engines, because they are difficult to acquire on short notice. Conversely, lower cost items with shorter production lead times, such as tires, do not receive funding priority. The Army accepts the risk of unfunded war reserve requirements in order to fund other priorities, such as operations and the procurement of new systems. Although the Army has reported the amount of war reserve underfunding to Congress, the risk of not funding the war reserve is not clearly stated. The Army’s requirements process did not accurately forecast the supplies needed to support U.S. forces during OIF for four of the items we studied (armored vehicle track shoes, lithium batteries, MREs, and tires). As indicated by Army regulation, AMC normally uses a computer model to forecast its spare and repair parts requirements. The model uses the average monthly demand over the previous 24 months to predict future equipment use and demand. Although the Army regulation indicates that the model should be able to switch to a wartime demand forecasting method, AMC officials stated the system has no wartime forecasting capability. As a result, the Army’s supply requirements forecasting model could not forecast the wartime surge in requirements during OIF. In contrast, DLA’s model has the capability to forecast requirements for contingencies, but it was not completely effective, as in the case of MREs. Instead of using the model, the Army relied on the expert opinion of item managers, who manage supply items at AMC’s subordinate commands. Item managers, however, did not have sufficient information on estimated deployment sizes or the duration and intensity of operations to accurately forecast supply requirements for OIF. According to TACOM officials, AMC initially directed item managers to use their expert opinion and knowledge to develop forecasts, without input from operational planners in CENTCOM. AMC officials stated that Army headquarters did not provide them with formal guidance on the duration of the conflict, supply consumption, or the size of the deploying force. AMC documents show, and its contractors confirm, that AMC gathered some anecdotal information on force size and the duration of operations in November 2002. However, item managers at AMC’s subordinate commands reported they did not always receive adequate guidance from AMC. For example, officials at TACOM stated they did not receive planning guidance on operational plans from AMC to incorporate into their forecasts of track shoe or tire requirements. The forecasted monthly requirement for Abrams track shoes was 11,125, which was less than half of the actual requirement of 23,462 shoes in April 2003. Forecasts for 5-ton truck tires were also inaccurate. Worldwide demand was forecasted at 1,497 tires during April 2003, less than a third of the actual demand of 4,800. 
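A short sketch shows why the 24-month moving-average method described above cannot keep up with a wartime surge. The steady peacetime demand series below is invented; it loosely mirrors the 5-ton truck tire case, in which a forecast of 1,497 covered less than a third of the roughly 4,800 tires actually demanded in April 2003.

```python
# A minimal sketch, assuming the 24-month moving-average method described
# above, illustrating how a peacetime-demand model lags a wartime surge.
# The demand series is invented for illustration.
def moving_average_forecast(history, window=24):
    """Forecast next month's demand as the mean of the last `window` months."""
    recent = history[-window:]
    return sum(recent) / len(recent)

demand = [1_500] * 24          # two years of steady peacetime demand
forecast = moving_average_forecast(demand)
wartime_demand = 4_800         # surge once combat operations began

print(f"forecast: {forecast:.0f}, actual: {wartime_demand}")
print(f"shortfall vs. forecast: {wartime_demand - forecast:.0f} tires")

# Even after the surge enters the history, one extreme month barely moves a
# 24-month average: ((23 * 1500) + 4800) / 24 is roughly 1,638.
demand.append(wartime_demand)
print(f"next forecast: {moving_average_forecast(demand):.0f}")
```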
In contrast to TACOM, officials at CECOM reported that in the summer of 2002, operational planners consulted them about the number of nonrechargeable lithium batteries needed to support operations. Subsequently, CECOM officials presented these new requirements to AMC and the Joint Materiel Priorities and Allocation Board and received $38.2 million in additional obligation authority for BA-5590 and BA-5390 batteries. Despite these efforts, demand for batteries outpaced production during OIF combat operations. The Marines forecast supply requirements for their initial operations based on operational plans and modeling factors, using a process that involves both operational and logistics planners. Modeling factors include historical supply data, the number of personnel involved in an operation, the distance of the operation, and the number of days of the operation. Normally, the forecasting process includes many echelons of Marine Corps command. Initially, the 1st Marine Expeditionary Force headquarters provides operational plans to the deploying units, which determine the supply requirements for an operation. Once the deploying units forecast a supply requirement, it is returned to headquarters for review. Deploying units review the supply requirements again before passing them to the Marine Forces Pacific Command for final assessment. The Marine Corps Logistics Command, the service’s integrated logistics manager, sends the requirements to DLA and other supply providers. Supply providers then inform the Logistics Command and the deploying units about their ability to fill the forecasted requirements. According to the Marines, they used this process to forecast supply requirements before deploying to OIF. After the end of major combat operations, the Marine Corps began an after-action review process to analyze the effectiveness of its OIF supply forecast. As part of this analysis, the 1st Marine Expeditionary Force assessed the correlation between supply forecasts and supply usage. This analysis showed that 88 percent of the types of repairable supply items forecasted were actually needed, and 62 percent of the types of consumable items forecasted were demanded, in the first 90 days. These data indicate that a significant number of unneeded items could have been sent to theater, placing an unnecessary burden on the distribution system. However, the Marine Corps has not analyzed whether the quantities forecasted equaled the quantities needed. Without such analysis, the Marines cannot determine if their forecasting process provided them with the right items in the right quantity at the right time during OIF. A lack of sufficient funding (obligation authority) within the Army’s working capital fund and delays in the release of funds limited the availability of three reviewed items (armored vehicle track shoes, lithium batteries, and tires). The delays may have been due to the Army’s multi-stage process to validate requirements. In contrast, DLA’s working capital fund was able to move sufficient obligation authority among accounts to support rapidly increasing demands. According to a DOD regulation, DOD components are supposed to structure their supply chain processes and systems to provide flexible and prompt support during crises. However, during fiscal year 2003, AMC did not promptly receive obligation authority to match its stated requirements. 
The Department of the Army released $2.9 billion of obligation authority into the Army Working Capital Fund to buy supplies in October 2002 as part of the fiscal year 2003 budget cycle (see fig. 4). This amount was based on the requirements established in the President's Budget prepared 2 years earlier. By the time fiscal year 2003 began, AMC had identified a new supply requirement of $4.8 billion to support both peacetime operations and the ongoing global war on terrorism, but the obligation authority provided in October 2002 did not support this revised requirement. While Army headquarters provided some additional funding to AMC, AMC increased its supply requirements again in March 2003 to $5.6 billion. However, the total obligation authority AMC received at that time equaled only $4.7 billion. AMC did not receive sufficient obligation authority to support the final validated requirements of nearly $6.9 billion until the end of fiscal year 2003, 4 months after major combat operations ended. In addition to establishing requirements to support its peacetime and continuing global war on terrorism operations, AMC identified additional requirements, called reset requirements, of $1.3 billion to support the repair of items coming back from theater. As figure 4 shows, by the end of fiscal year 2003, AMC had received $6.9 billion of its total requirement of $8.2 billion (including reset) in obligation authority, a shortfall of $1.3 billion. We could not determine exactly why sufficient funding was not provided to AMC more quickly, because sufficient records were not available to track when AMC and its subordinate commands requested additional funding from Army headquarters or the amounts requested. Similarly, Army headquarters could not provide information about the timing of its requests for additional obligation authority to the Office of the Under Secretary of Defense (Comptroller). However, the Army's requirements validation process likely contributed to delays in the release of obligation authority into AMC's Army Working Capital Fund. After AMC completes its internal validation process and requests additional funding above the programmed requirement during the year of execution, several organizations within Army headquarters reexamine AMC's requirements. In the absence of specific direction in Army regulations, Army headquarters has developed a process for validating AMC's requirements. While the Office of the Deputy Chief of Staff for Logistics has the main responsibility for validating AMC's functional requirements, the Office of the Deputy Chief of Staff for Operations, the Office of the Deputy Chief of Staff for Program and Analysis, and the Assistant Secretary of the Army (Financial Management and Comptroller) must also review and agree on the requirements. Once these organizations validate the requirements, the Assistant Secretary of the Army (Financial Management and Comptroller) requests obligation authority from the DOD Comptroller. This lengthy process may have delayed the release of funds and contributed to shortages of tires, track shoes, and lithium batteries. The additional time associated with the Army's validation process was exemplified by events during fiscal year 2003. AMC set its revised supply requirement at $4.8 billion in October 2002. However, the DOD Comptroller did not release the first additional obligation authority of $600 million until January 2003, 3 months later.
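Lining up AMC's stated requirements against the obligation authority it had received makes the lag concrete. Below is a minimal sketch using the fiscal year 2003 figures cited above; the month-by-month interim releases are simplified.

```python
# Requirement vs. cumulative obligation authority for AMC in FY 2003,
# using the figures cited in this report (dollars in billions).

milestones = [
    ("Oct 2002", 4.8, 2.9),
    ("Mar 2003", 5.6, 4.7),
    ("Sep 2003", 8.2, 6.9),  # includes the $1.3 billion reset requirement
]

for date, requirement, authority in milestones:
    gap = requirement - authority
    print(f"{date}: requirement ${requirement}B, "
          f"authority ${authority}B, gap ${gap:.1f}B")
```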
Army headquarters released its next increase of obligation authority, $1.1 billion, in March 2003, for a total of $4.6 billion, nearly 6 months after AMC identified the initial requirement. Army officials were unable to tell us when they had submitted their requirements to the DOD Comptroller because they said they often submitted requests for additional obligation authority during the fiscal year informally via e-mails and telephone calls. Accordingly, detailed records were not kept. We were able to confirm that the time between the releases of obligation authority from the DOD Comptroller to the Department of the Army did not exceed 2 weeks. This indicates that for most of the 6 months, AMC's request for additional obligation authority was somewhere in the Department of the Army's validation process. In addition to delays in receiving funds, AMC suffered from an overall lack of sufficient obligation authority for its major subordinate commands that contributed directly to shortages of tires, track shoes, and lithium batteries during OIF. The initial priority was to provide funding to the U.S. Army Aviation and Missile Command to support critical aviation systems and then to TACOM for tracked and tactical wheeled vehicles. Without prompt obligation authority, item managers could not contract for supplies in anticipation of customer demands. According to item managers, they need sufficient obligation authority in advance of customer demands in order to have sufficient supplies available. TACOM officials reported that the lack of adequate obligation authority prevented them from buying supplies, including tires and tank track shoes, in anticipation of demands. For example, in October 2002, TACOM item managers identified total requirements of nearly $1.4 billion but had authority to obligate only approximately $900 million. By May 2003, TACOM's requirements had increased to over $2.1 billion, but only $1.5 billion in obligation authority was available. By the end of the fiscal year, TACOM's total requirement, including funds necessary to reset the force, totaled $2.7 billion, but its obligation authority was less than $2.4 billion. This shows that obligation authority for tires and tank track shoes lagged behind requirements, thereby preventing item managers from buying supplies in anticipation of demand. Similarly, CECOM reported that unfunded requirements over several prior years affected its purchases of lithium batteries. CECOM identified requirements of nearly $1.5 billion for fiscal year 2003 but received less than $1.1 billion in obligation authority for the year. In contrast to the Army, DLA received sufficient obligation authority from the DOD Comptroller to meet increasing customer supply needs during OIF. DLA set the requirements for its supply management business area at $18.1 billion and received $16.5 billion of this amount in obligation authority in October 2002 (see fig. 5). By February 2003, it had received obligation authority that kept pace with increasing requirements. As requirements increased during OIF, the agency was able to request and receive additional obligation authority to purchase supplies in anticipation of customer needs. By the end of the fiscal year, DLA's supply requirements had reached $25.0 billion, and it had received an equal amount of obligation authority to meet those requirements.
DLA officials told us they were able to obtain timely increases in obligation authority from the DOD Comptroller because of their requirements validation process. The agency conducts an ongoing review of its requirements throughout the year to ensure they are updated as changes in demand occur. Agency officials then request additional obligation authority directly from the DOD Comptroller. This process allowed DLA to submit requirements and receive increased obligation authority several times during fiscal year 2003. Agency officials stated that having accurate requirements ensures that the DOD Comptroller accepts the requirements without further validation. According to DLA officials, the DOD Comptroller was responsive to the agency's needs, providing additional obligation authority within days of a request. The supply system faced several challenges in rapidly acquiring three of the items we reviewed to meet the emerging demands caused by OIF (Interceptor body armor, lithium batteries, and up-armored HMMWVs and kits). According to a DOD regulation, supply chain processes and systems, which include relationships with commercial manufacturers, should provide flexible and prompt support during crises. The rapid increase in demand for the three items was not anticipated, and as DOD's supply system attempted to purchase them, its efforts were impeded by problems that varied by item. For example, while lithium battery production was limited by the capacity of a sole source supplier and long production lead-times, body armor manufacturers faced shortages of the material components of the armor. DLA data show that manufacturers of body armor could not meet the surge in demand for the item's two primary components, plates and vests. For example, the worldwide demand for plates increased from 9,586 in December 2002 to 108,808 in March 2003 to a peak of 478,541 in December 2003. A comparison of plate production rates over the same time period shows that only 3,888 plates were produced during December 2002, 31,162 during March 2003, and 49,746 during December 2003. Increasing requirements exceeded the manufacturers' capacity to produce sufficient body armor. For example, in October 2003, CENTCOM expanded requirements for body armor to include all U.S. personnel in its area of responsibility. The industrial base could not meet this new requirement due to a lack of construction materials and long production lead-times. Vest production was hindered by the limited availability of Kevlar; plate production was initially slowed by the lack of a special backing for the plates and later by the limited availability of the primary component of the plates, ceramic tiles. In addition, the minimum production lead-time of 3 months limited the manufacturers' ability to accelerate production levels to meet increasing demand, especially for plates, which are more difficult to manufacture than vests. Due to these industrial-base limitations, body armor was not issued to all troops in Iraq until January 2004, 8 months after major combat operations were declared over. Demand for lithium batteries exceeded inventory and production during OIF. As mentioned earlier in this report, the requirement for lithium batteries had not been assessed for the war reserve for several years. Worldwide demand for lithium batteries increased from 38,956 batteries per month in October 2001 to a peak of 330,600 in April 2003.
Concurrent battery production was 32,000 in October 2001 and 114,772 in April 2003, and thus was insufficient to meet demand. CECOM expanded battery production from one to three manufacturers and received $38.2 million to increase production capacity in late 2002; even so, the 8-fold increase in battery demand outstripped production capacity. A 6-month production lead-time for the batteries precluded the ability of the three manufacturers to meet the peak demand during April 2003. The Marines reported being down to only a 2-day supply of batteries in CENTCOM's area of responsibility in April 2003, despite OIF's priority on worldwide supply shipments. By late 2003, the Army was able to acquire enough batteries so that its inventory exceeded back orders. Meeting rapidly increasing requirements for armoring HMMWVs also faced constraints. For example, CENTCOM stated a requirement for 1,407 up-armored HMMWVs to support OIF in August 2003 that grew to 8,105 vehicles in September 2004. Manufacturers were producing 51 up-armored HMMWVs per month in August 2003. Recognizing the increase in requirements in February 2004, the Army reached an agreement with the manufacturers to increase production to 460 vehicles per month by October 2004. However, the signed agreement with the manufacturer indicated that maximum production could have been increased to 500 vehicles per month. Funding was not available when the requirements increased. As a result, Army officials stated that the up-armored HMMWV manufacturers were unable to operate at maximum capacity. In order to produce vehicles at a consistent and high rate, the manufacturer needs to be assured of consistent funding at least 3 months in advance of delivery. While additional funding was received in fiscal year 2004, program managers stated they often did not know when the next funding release would occur, further complicating their procurement planning. As of September 2004, Army data show that 5,330 of the 8,105 required up-armored HMMWVs were in CENTCOM's area of responsibility. Similar issues affected the delivery of add-on-armor kits from the Army's depot system even more dramatically. Kit production in the Army's depot system reached its maximum rate of 3,998 kits per month in April 2004 and would have met the requirement sooner had the Army not slowed production. Moreover, additional unused capacity was available for kit production. In February 2004, a contractor-operated Army facility informed Army depot managers that it could produce an additional 800 kits per month. While item managers stated they did not use this contractor-operated facility because of issues with contract timing and price, they did not have information about the reason for reducing the level of production at the government-operated facilities. Army data show that kit production will meet CENTCOM's September 2004 requirement for 13,872 kits no later than March 2005. DOD's response to the various acquisition challenges presented by these items was inconsistent, lacked transparency, and did not fully exploit all of its capabilities to influence production. If the Army had forecasted an accurate requirement for the batteries, the need for additional manufacturers would have been recognized and production capacity could have been increased in time to better meet the demand. In the case of the other two items, the rapid increase in demand was not as predictable, and DOD responded differently to each.
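A back-of-envelope calculation with the figures above shows how long the up-armored HMMWV gap would take to close at the cited production rates. The sketch below ignores attrition and vehicles already in the delivery pipeline.

```python
# Months needed to close the up-armored HMMWV gap at the production
# rates cited above (attrition and in-transit vehicles ignored).

requirement = 8_105   # CENTCOM requirement, September 2004
in_theater  = 5_330   # vehicles in CENTCOM's area, September 2004
agreed_rate = 460     # vehicles per month from October 2004
max_rate    = 500     # maximum rate per the signed agreement

def months_to_close(gap, rate):
    """Ceiling division: whole months needed to produce `gap` vehicles."""
    return -(-gap // rate)

gap = requirement - in_theater            # 2,775 vehicles
print(months_to_close(gap, agreed_rate))  # 7 months at the agreed rate
print(months_to_close(gap, max_rate))     # 6 months at the maximum rate
```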
DOD officials made a very aggressive effort to focus on and improve the availability of body armor, using regulatory authority to increase production. DOD also provided visibility over the problem and courses of action to Congress. By contrast, DOD's response to the need for more armor protection for HMMWVs was more measured, and its acquisition solution was less transparent. This may be why members of Congress expressed specific concerns about the availability of these items and designated specific funds for them. Problems in DOD's distribution system adversely affected the delivery of many items to troops. While all seven items may have experienced distribution problems, we found that four items (AAV generators, Interceptor body armor, MREs, and tires) were directly and negatively affected, causing troops to experience shortfalls. Distribution is the operational process of synchronizing all elements of the logistics system to deliver the "right things" to the "right place" at the "right time" to support the combatant commander. This complex process involves the precise integration of doctrine, transport equipment, personnel, and information technology. Among the problems encountered during OIF were (1) conflicting doctrine defining the authority of the geographic combatant commander to synchronize the distribution of supplies from the U.S. to theater; (2) improper packaging of air shipments from the U.S., which forced personnel in theater to spend time opening and sorting shipments as they arrived; (3) insufficient transportation equipment and supply personnel in theater; and (4) the inability of logistics information systems to support the requisition and shipment of supplies into and through Iraq. We found that conflicting doctrine impeded the establishment of a distribution system capable of delivering supplies to the warfighter smoothly and on time. Distribution begins with the supplier, ends with the warfighter, and includes all systems, both physical and informational, needed to move and manage supplies between the two ends. Currently, joint doctrine institutionalizes separate management of sections of the end-to-end distribution system by placing responsibility for logistics support outside the theater with the individual services and the U.S. Transportation Command. However, it also requires the theater commander to synchronize all aspects of logistics necessary to support the mission. This conflicting doctrine is contrary to DOD's distribution principle of centralized management, which prescribes that one manager should be responsible for the end-to-end distribution of supplies. A SAIC study also reports that joint doctrine does not contain any specific or prescriptive guidance on how the combatant commander might ensure a seamless distribution process. During OIF, the CENTCOM combatant commander could not synchronize the distribution process to support the mission as required by doctrine because he lacked the level of control that joint doctrine suggests. Instead, the combatant commander had to rely on other DOD components responsible for different functions along the distribution process to gain end-to-end visibility as supplies moved through the distribution system. For example, as we reported in December 2003, although CENTCOM issued a policy requiring the use of radio frequency identification tags on all shipments to track assets shipped to and within the theater, tags were not always used.
Officials from various DOD components reported that, while no data were compiled on the frequency of shipments being labeled with radio frequency identification tags, only about 30 percent of the pallets and containers coming into the theater were tagged. Officials gave various reasons for not following the commander's policy: personnel were not trained to use the technology, tags were not available to place on loads moving through the system, and requirements to use the technology were not compatible with current operating systems. In addition, some Army officials reported that CENTCOM does not have jurisdiction over their process for shipping and tracking assets. Therefore, while CENTCOM issued guidance directing the implementation of an in-transit visibility system that relied on the tags, the command could not control the organizations outside of theater responsible for applying the tags to ensure their proper use. DOD's distribution principle of maximizing throughput calls for reducing the number of times that supplies are opened and sorted in transit so that distribution to warfighters is prompt. Early in OIF, inefficient packaging and palletizing of air shipments created supply backlogs in Kuwait. In turn, these backlogs delayed the delivery of supplies shipped by air to units in Iraq, which included armored vehicle track shoes, body armor, and tires. Insufficient information was available to allow us to determine how each individual item was affected by this distribution problem. According to Army officials, shipments move most efficiently when they are packed and palletized so that they do not have to be unpacked and reloaded while in transit; sending such "pure" shipments to a single unit is a standard peacetime procedure. During OIF, Army officials expected each pallet or container to contain supplies for only one unit or location. However, initial shipments included spare parts and supplies for several geographically separate units. DLA officials stated that U.S. distribution centers could not handle the high volume of supplies, so many shipments were loaded with items for more than one customer, or "mixed." They also said that the volume of supplies arriving daily for consolidation into air shipments overwhelmed distribution centers in the U.S. The facilities were structured to handle peacetime requirements and lacked the necessary equipment and personnel to handle the surge capacity associated with wartime. Officials stated that mixed cargo was often sent to the theater without being sorted in order to make room for incoming supplies. Moreover, the lack of sorting continued because of a miscommunication between CENTCOM and DLA, the shipper. CENTCOM expected the peacetime practice of pure pallets to continue during OIF, while DLA officials focused on moving pallets to theater regardless of whether they were pure or mixed. However, at the same time, RAND analysts reported that DLA facilities were sending pure pallets to U.S. Army units in Europe and Korea. Once in theater, mixed shipments had to be manually opened, sorted, and re-palletized at theater distribution points, causing additional delays. According to staff at the Theater Distribution Center in Kuwait, some mixed shipments were not marked with all the intended destinations, so the contents of the shipments had to be examined. Army officials stated that because mixed pallets of supplies were initially sent to theater, over 9,000 pallets piled up at the center.
By the fall of 2003, 30 percent of the pallets arriving at the center still had to be reconfigured in some way. The Army and DLA recognized the problem and worked in conjunction with CENTCOM to establish a "pure palleting" process at the distribution center in Pennsylvania. This resulted in potentially longer processing times in the United States in order to reduce customer wait time in theater. According to a RAND study, the pallets arriving in theater between January and August 2003 contained more than 20,000 cardboard boxes of small items. Over 4,300 boxes, about one in every five, were mixed and may have been opened and their contents sorted before being forwarded on to customers, which further delayed delivery. The RAND study showed that it took an average of 25 to 30 days for such mixed boxes to travel from a port in theater, through the theater center, to the supply activity that ordered the items. In contrast, when a box with items for only one unit was shipped, it took an average of 5 to 10 days to reach the customer. The lack of sufficient logistics resources, such as trained supply personnel and cargo trucks, hindered DOD's efforts to move supplies promptly from ports and theater distribution centers to the units that had ordered them, as expected under the DOD principle of optimizing the distribution system. As a result, some troops experienced delays in receiving MREs and other supplies, thereby reducing operational capabilities and increasing risk. According to military personnel, there was a shortage of support personnel in theater prior to and during the arrival of combat forces. Moreover, those that arrived were often untrained or not skilled in the duties they were asked to perform. The effects of these shortages of trained personnel were most evident at the Theater Distribution Center, where they resulted in backlogs and delays in the processing (receipt, sorting, and forwarding) of supplies. For example, in the late February to early March 2003 time frame, the center had only about 200 reserve personnel and did not reach its authorized staffing level of 965 supply/support personnel until May 2003. Moreover, when the center opened, it already had an estimated 5,000-pallet backlog, and its commander employed ad hoc work details drawn from surrounding support units to help. Furthermore, organizations outside of the theater, such as TACOM, sent personnel to Kuwait to ensure that specific items, such as tires, were properly processed and sent to the correct customers. Moreover, according to after-action reports, lessons learned studies, and discussions with military personnel, the lack of adequate ground transportation, especially cargo trucks, contributed significantly to the distribution problems. For example, an Army official with the 377th Theater Support Command, which was responsible for logistics support in Kuwait, stated that when combat began his unit needed 930 light/medium and medium trucks but had only 515 trucks on hand. Although his unit "managed" with what was available, he said that the shortage of equipment used to haul supplies into Iraq created a strain on materiel movement. Both the Marine Corps and the 3rd Infantry Division also reported that available transportation assets could not meet their capacity requirements. Even high-priority items, such as food, did not always move as intended. According to a CENTCOM after-action report, contractors responsible for moving MREs from ports to the Theater Distribution Center at times had only 50 of the 80 trucks needed.
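Expressed as a share of stated need, the transport shortfalls cited above were substantial; a minimal sketch using the figures from the report:

```python
# Ground-transport shortfalls cited above, as a share of stated need.

shortfalls = [
    # (asset, on hand, needed)
    ("377th TSC light/medium and medium trucks", 515, 930),
    ("Contractor trucks for moving MREs",         50,  80),
]

for asset, on_hand, needed in shortfalls:
    print(f"{asset}: {on_hand} of {needed} on hand "
          f"({on_hand / needed:.0%} of requirement)")
```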
DLA officials reported that at one time 1.4 million MREs were stored at a port in theater, awaiting transport to customers. One cause of the distribution resource problems was the failure of the force deployment planning process to properly synchronize the flow of combat and support forces. DOD normally uses time-phased force deployment data to identify and synchronize the flow of forces during an operation. Key elements of this process include requirements for military transportation companies, contractor-provided services, and host nation logistics support. However, the process was "thrown out" in the planning leading up to OIF. Around November 2002, DOD started to use another method for deployment planning, referred to as the Request for Forces process. The Request for Forces process segregated the initial deployment plan into over 50 separate deployment orders. DOD's priority was for combat forces to move into theater first. Under this new process, logistics unit commanders had to justify the flow of their units and equipment into the theater—often with little success. According to some DOD planners, this approach did not adequately meet planner needs, especially the needs of logisticians. Each deployment order required its own transportation feasibility analysis, which resulted in a choppy flow of forces into the theater. This in turn caused imbalances in the types of personnel needed in the theater to handle logistics requirements. Furthermore, a RAND study suggests that distribution assets, particularly for components such as the 377th Theater Support Command and the 3rd Corps Support Command, were either deleted from the deployment plan or shifted back in the deployment timeline. As a result, logistics personnel could not effectively support the increasing numbers of combat troops moving into theater. DOD took steps to mitigate the impact of some distribution problems, but these did not always work. For example, according to a RAND report, priority was given to moving critical supplies, such as food, water, ammunition, and fuel. Other items, including spare parts, were moved on a very limited, opportune basis. As a result, according to one after-action report, it took nearly 2 weeks after U.S. forces moved into Iraq for the first shipment of spare parts to reach combat forces, and this delivery was inadequate to support an entire division engaged in combat operations. Moreover, the Army confirmed that after 45 days of enemy engagement, moving water still consumed over 60 percent of available theater transportation trucks. A Marine Corps after-action report listed repair parts distribution as a "near-complete failure." In order to move supplies to the troops, both the Army and Marines contracted for additional trucks. For example, the Marine Corps contracted for $25.6 million in services from several commercial trucking companies to support combat operations. It justified this action by identifying deficiencies in the transportation support it expected from other components in theater. However, Army officials stated that the Army's contractors did not always have sufficient trucks to move supplies as required because contracts did not specify a level of operational readiness for trucks. As a result, even if trucks were available, they were not always functional. In its after-action report, the 3rd Infantry Division stated that available transportation assets and contracted host nation support could not meet divisional requirements for carrying capacity.
According to military doctrine, financial, communication, and information systems used to support supply distribution must be accessible, integrated, connected, and contain accurate and up-to-date information. In other words, these systems need to provide a seamless flow of all types of essential data from the point of production to the warfighter. However, during OIF, the logistics systems used to order, track, and account for supplies were not well integrated; moreover, they could not provide the essential information necessary to effectively manage theater distribution.

Data Transmission Problems Hindered Supply Requisitions

Logistics information systems in use during OIF could not effectively transmit data, making it difficult to process and track requisitions for critical supplies. A number of factors limited communications between the various logistics systems, including a lack of bandwidth in the theater to satisfy all systems users, systems that were incompatible with each other, units lacking the necessary equipment or being delayed in connecting to the supply system, and distances being too great for supply activities to effectively transmit data by radio. For example, the supply activities in the Army's 3rd Infantry Division received only about 2,500 of the over 10,000 items—including armored vehicle track shoes, lithium batteries, and tires—they requisitioned between August 2002 and September 2003. Officials at the 3rd Infantry Division attributed this issue specifically to communications problems between systems. Army officials also cited poor communications as a major factor leading to a $1.2 billion discrepancy between the amount of supplies shipped to the theater and the amount actually acknowledged as received, which we reported on in December 2003. The Marine Corps similarly experienced communications problems between its information technology systems during OIF. Marine forces deployed with two versions of the Marine Corps Asset Tracking Logistics & Supply System, which were not compatible with each other. Marine Corps units in the 1st Marine Expeditionary Force were using the Asset Tracking Logistics & Supply System I for frontline logistics, while the 2nd Marine Expeditionary Force was using the Asset Tracking Logistics & Supply System II+ for theater support. Therefore, requisitions from Marine support activities at the front lines could not be transmitted directly to Marine logistics units in the rear. Instead, the Marines used other processes, such as e-mail and satellite phone, to requisition supplies. However, this left ordering units without any information on the status of their requisitions. As a result, many duplicate orders were submitted, which may have unnecessarily added more cargo to the already overwhelmed theater distribution system. A study by SAIC also noted that the lack of logistics communications was a weakness during OIF. The Army's Deputy Chief of Staff for Logistics has since provided satellite communications equipment to the units operating in theater to help alleviate these communication difficulties.

Available Logistics Information Systems Did Not Provide Adequate Visibility

Another major problem encountered during OIF was a lack of adequate visibility over supplies in the distribution system.
While the operation order for OIF called for the use of radio frequency identification, tracking was limited primarily by a failure to place radio frequency identification tags on all shipments sent to the theater and by a lack of the fixed scanners needed to read the tags. For example, some ports, such as one we observed in Bahrain, had no scanners at all. An equally challenging problem was that scanners often failed under the harsh environmental conditions. According to one Army assessment, only 50 percent of the scanners inspected in Kuwait were operational. In addition to problems with the radio frequency identification technology, there was no suitable information system infrastructure to track and identify supply assets. SAIC reported that the Joint Total Asset Visibility system could not provide commanders with the asset visibility they needed, while military officials in theater told us they knew of no joint system that tracked supplies from the point of production to the warfighter. Rather, logistics personnel relied on a number of unintegrated tracking systems. As a result, CENTCOM and the major combat forces in the Army and Marine Corps could not adequately track or identify supplies moving to and within the theater. The lack of in-transit visibility over supplies significantly affected distribution. For example, an Army official responsible for logistics operations at the Theater Distribution Center noted that incomplete radio frequency identification tags forced the center's personnel to spend time opening and sorting incoming shipments. This, in turn, significantly increased processing time, contrary to DOD's principle of maximizing throughput. As a result, according to a CENTCOM issue paper, around 1,500 Small Arms Protective Insert plates for body armor were lost. Another CENTCOM report stated that 17 containers of MREs were left at a supply base in Iraq for over a week because no one at the base knew they were there. According to Marine Corps officials, they became frustrated with their inability to "see" supplies moving toward them and lost trust and confidence in the logistics system and processes. For example, the Marines could verify the receipt of only 15 of the 140 AAV generators that were shipped to them.

Changes to Unit Address Codes Disrupted Logistics Information Systems

The use of new OIF-specific address codes, known as Department of Defense Activity Address Codes, for ordering supplies limited the effectiveness of logistics information technology. The codes ensure that supplies are sent to the correct address of the ordering unit and that the correct unit is charged for the supplies. Because of poor linkages between Army logistics and financial systems, a problem of where to ship and whom to bill arises unless a unit or activity deploys intact. For example, while some parts of the 3rd Infantry Division remained in the U.S. during OIF, the majority of the division deployed to Iraq. To ensure that ordered supplies went to the correct location of a deployed unit and that the unit was correctly charged, new codes specifically set up for OIF were issued to deploying entities. Meanwhile, the original codes remained with the portion of the unit that did not deploy. Approximately 10,000 new codes were created for OIF. This caused significant disruptions to logistics information systems, as the new data had to be manually updated in each system.
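The mechanics of the problem are easy to see in miniature: every logistics and financial system keeps a master table mapping address codes to locations, and a new OIF-specific code is invisible to any system whose table has not been updated. The sketch below is illustrative only; the codes and locations are hypothetical.

```python
# Illustrative sketch of the address-code problem; codes and locations
# are hypothetical. The original code stays with the rear element, and
# a new OIF-specific code must be added to every system's master table.

master_codes = {
    "W12ABC": "Fort Stewart, GA",  # original code, nondeploying element
}

# New code for the deployed element; roughly 10,000 such codes were
# created for OIF, and each system had to be updated manually.
master_codes["W99XYZ"] = "Camp Arifjan, Kuwait"

def route_shipment(code):
    # A system whose master table was not updated cannot resolve the new
    # code, producing the "it doesn't belong here" rejections quoted below.
    return master_codes.get(code, "REJECT: unknown address code")

print(route_shipment("W99XYZ"))  # resolves only after the table update
```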
Many problems occurred during this process, such as the issuance of inactive codes, the use of codes already assigned to other units, and the input of incorrect data into logistics systems. These problems were another factor contributing to the $1.2 billion discrepancy between supplies shipped and supplies received. Furthermore, there was a delay in updating the master code schedule that contained all the locations associated with the new codes. This caused significant problems for the Theater Distribution Center. According to an April 10, 2003, Theater Distribution Center log entry, "Upwards of 50% of pallets shipped to Doha and 20-30% of pallets shipped to Arifjan are being returned/rejected with the reason being, 'it doesn't belong here.' The master [files] are not being updated when units move in or out and the [center] is double and triple handling cargo." Given that the center was already experiencing problems with personnel and equipment shortages, additional handling of the same supplies increased its difficulties.

Logistics Personnel Were Not Trained on Some of the Logistics Information Systems in Use

A lack of adequate training for logistics personnel also negatively affected the performance of logistics information systems. For example, according to a 101st Airborne after-action review, loading codes and interfacing with data caused problems that training could have resolved. Lack of training also contributed to problems with asset visibility. According to a logistics study, units were generally not trained in the use of radio frequency identification devices. Marine Corps officials likewise stated that their personnel were untrained in the use of tracking equipment. As a result of logistics issues that arose during OIF, DOD, the services, and the defense agencies have undertaken a number of actions to improve the availability of equipment and supplies during ongoing and future operations. Some are short-term actions aimed at improving immediate supply availability. For example, as a result of the battery shortage, in July 2003 the Joint Staff Logistics Directorate established a revolving "critical few list" of approximately 25 items that the services and various commands report as most critically needed worldwide. The Joint Staff Logistics Directorate, in conjunction with the services, determines the causes of the shortages and makes recommendations to the Office of the Secretary of Defense and the services for corrective action and execution. Other actions are long-term, systemic changes that are designed to improve the overall effectiveness of the supply system. While we did not evaluate the changes' potential for success, we observed that the majority of them focus on the distribution aspects of logistics problems, not the full range of supply deficiencies we identified. However, other GAO engagements are currently underway to assess some of these initiatives. (Specific short-term and long-term actions related to each item are noted in the appendixes.) Inaccurate and inadequately funded war reserve requirements. The Army has not updated or run its war reserve model in order to systematically ensure the accuracy of its war reserve requirements. Due to its risk management decisions, it has also not funded its war reserve requirements. However, the Army has made manual changes to its war reserve inventory levels, based on the usage of certain items during OIF. Inaccurate supply forecasts.
DOD and the services have not undertaken systemic actions specifically aimed at improving the accuracy of supply forecasts. However, DLA has undertaken action to improve its customer support through its Customer Relationship Management program, which could potentially improve its ability to forecast customer demands. Insufficient and delayed funding. The Army has not undertaken long-term actions to expedite its funding process during contingencies to be more responsive to customer needs. However, it has undertaken short-term actions to obtain additional funding for specific supply items. For example, AMC directed funding toward armored vehicle track shoes. Delayed acquisition. DOD has not undertaken long-term actions to address acquisition issues that contributed to shortages of certain case study items. However, DLA has undertaken other actions to improve its ability to leverage industrial-base capabilities. DLA seeks to improve industrial-base support through its collaborative planning initiatives with industry. For example, its Strategic Materiel Sourcing program establishes long-term contracts for approximately 500,000 (of a total 4.6 million) items the agency considers critical to its customers' needs. In addition, its Strategic Supplier Alliances program establishes formal relationships with the agency's top 32 sole source suppliers. Ineffective distribution. DOD has undertaken many initiatives to improve its distribution system, including the Secretary of Defense's designation of the U.S. Transportation Command as the Distribution Process Owner. According to a Secretary of Defense memorandum, the U.S. Transportation Command is responsible for improving the overall efficiency and interoperability of distribution-related activities. In January 2004, the command established a CENTCOM Deployment and Distribution Operations Center, which is responsible for directing airport, seaport, and land transportation operations within the OIF theater. DOD's Pure Pallet initiative seeks to reduce inefficiencies in the distribution process and improve in-transit tracking of shipments by building containers and pallets with radio frequency identification tags that are designated for units within a specific geographic location. The Army and DLA have also undertaken numerous actions to improve the distribution system. The Army has identified four areas of focus for the next 2 years: (1) "Connect Army Logisticians" by using technology to provide logisticians and warfighters with real-time visibility over distribution and warfighter requirements, (2) "Modernize Theater Distribution" by developing a distribution-based logistics system to support the warfighter, (3) "Integrate the Supply Chain" by providing a system-wide view of the supply chain through the integration of processes, information, and responsibilities, and (4) "Improve Force Reception" by enhancing the Army's ability to deploy forces to theaters of operation, establishing an early port opening capability that will support in-theater expansion and increased theater sustainment. Furthermore, the Army has expanded its Rapid Fielding Initiative, which accelerates acquisition and fielding processes to ensure that troops deploy with high-priority items. DLA has also expanded its Forward Stocking Initiative by opening a fifth forward stock depot in Kuwait to reduce customer wait time and transportation costs.
Moreover, AMC and DLA have formed a partnership in which they will explore the use of commercial systems to increase supply readiness, improve in-transit visibility, cut costs, and improve parts resupply to field locations. In times of war, the defense supply system needs to be as responsive and agile as the combat forces that depend upon it. In the Quadrennial Defense Review for 2001, DOD stated its intention to transform its logistics capabilities to improve the deployment process and implement new logistics support tools that accelerate logistics integration between the services and reduce logistics demand and cost. OIF tested this system as well as industry’s capability to meet rapidly increasing demands, and, in many instances, the system failed to respond quickly enough to meet the needs of modern warfare. While units in Iraq achieved success on the battlefield, the supply chain did not always adequately support the troops and combat operations. A number of problems prevented DOD from providing supply support to its combat forces at many points in the process, which reduced operational capabilities and forced combat commanders to accept additional risk in completing their missions. An inability to adequately predict the needs of warfighters at the onset of the war, coupled with a slow process for obtaining additional resources once those needs were identified, resulted in critical wartime shortages. In addition, even when sufficient supply inventories were available within the system, they were not always delivered to the combat forces when they needed them. All of these problems were influenced to some extent by a lack of accurate and timely information needed to support processes and decisions. Unless DOD’s logistics process improves the availability of critical supplies during wartime, combat forces engaged in future operations will likely be exposed to risks similar to those experienced in Iraq. These risks will continue to exist unless DOD is able to improve the availability of war reserve supplies at the start of operations and overcomes problems forecasting accurate wartime demands. Moreover, delays in the Army’s funding processes will continue to place U.S. troops at risk by not enabling AMC to swiftly meet surges in wartime demands. In addition, future combat operations may be adversely affected unless DOD is able to anticipate acquisition delays that could affect the availability of critical supplies and provide transparency into how it expects to mitigate production risks. Finally, merely increasing the availability of supplies in the inventory will not help combat forces in the field. Troops will continue to face reduced operational capabilities and unnecessary risks unless DOD’s supply chain can distribute the right supplies to the right places when warfighters need them. While DOD took immediate steps to overcome some shortages, and is beginning to develop solutions to some of the problems identified during OIF, most systemic solutions have tended to center on resolving distribution problems. If supply logistics transformation is to be successful, DOD’s supply chain reform will need to include solutions for the full gamut of identified deficiencies contributing to supply shortages during OIF. 
An integrated approach to addressing all of these deficiencies will increase DOD's potential to achieve responsive, consistent, and reliable support to warfighters, a goal envisioned in the National Military Strategy and its logistics concepts and necessary to support the continued dominance of the U.S. military. To improve the effectiveness of DOD's supply system in supporting deployed forces for contingencies, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions and specify when they will be completed: Improve the accuracy of Army war reserve requirements and transparency about their adequacy by updating the war reserve models with OIF consumption data that validate the type and number of items needed, modeling war reserve requirements at least annually to update the war reserve estimates based on changing operational and equipment requirements, and disclosing to Congress the impact on military operations of its risk management decision about the percentage of war reserves being funded. Improve the accuracy of its wartime supply requirements forecasting by developing models that can compute operational supply requirements for deploying units more promptly as part of prewar planning and by providing item managers with operational information in a timely manner so they can adjust modeled wartime requirements as necessary. Reduce the time delay in granting increased obligation authority to the Army Materiel Command and its subordinate commands to support their forecasted wartime requirements by establishing an expeditious supply requirements validation process that provides accurate information to support timely and sufficient funding. We also recommend that the Secretary of Defense direct the Secretary of the Navy to improve the accuracy of the Marine Corps' wartime supply requirements forecasting process by completing the reconciliation of the Marine Corps' forecasted requirements with actual OIF consumption data to validate the number as well as the types of items needed and by making necessary adjustments to its requirements. The department should also specify when these actions will be completed. We recommend that the Secretary of Defense direct the Secretary of the Army and the Director of the Defense Logistics Agency to take the following two actions: minimize future acquisition delays by assessing the industrial-base capacity to meet updated forecasted demands for critical items within the time frames required by operational plans, as well as specify when this assessment will be completed; and provide visibility to Congress and other decision makers about how the department plans to acquire critical items to meet demands that emerge during contingencies. We also recommend the Secretary of Defense take the following three actions and specify when they will be completed: revise current joint logistics doctrine to clearly state, consistent with policy, who has responsibility and authority for synchronizing the distribution of supplies from the U.S.
to deployed units during operations; develop and exercise, through a mix of computer simulations and field training, deployable supply receiving and distribution capabilities, including trained personnel and related equipment for implementing improved supply management practices, such as radio frequency identification tags that provide in-transit visibility of supplies, to ensure they are sufficient and capable of meeting the requirements in operational plans; and establish common supply information systems that ensure that DOD and the services can requisition supplies promptly and match incoming supplies with unit requisitions to facilitate expeditious and accurate distribution. To improve visibility over the adequacy of the Army's war reserves, Congress may wish to consider requiring the Secretary of Defense to provide it with information that discloses the risks associated with not fully funding the Army war reserve. This reporting should include not just the level of funding for the war reserve, which is currently reported, but also timely and accurate information on the sufficiency of the war reserve inventory and its impact on the Army's ability to conduct operations. In written comments on a draft of this report, DOD agreed with the intent of the recommendations and cited actions it has taken or is taking to eliminate supply chain deficiencies. Some of the actions, when completed, could resolve the problems we identified. Because DOD did not specify dates for completing all of its actions, we modified our recommendations to require specific time lines for their completion. DOD is taking other actions that are not sufficient to fulfill our recommendations, and, in several cases, the department's comments did not specifically address how it plans to improve current practices. In addition to our evaluation below, we address each of DOD's comments in appendix XI, where its complete response is reprinted. The department cited several actions it is taking to improve the accuracy of war reserve requirements, support prewar planning through supply forecasting, minimize future acquisition delays, and improve supply distribution. However, it did not clearly identify time lines for fully implementing most of these actions. For example, initiatives to improve modeling and data for determining war reserves had no dates for implementation. In some cases, the department provided tentative schedules, such as with the fielding of the Army's Logistics Modernization Program to improve supply forecasting, which it expects to be in full use in fiscal year 2007. In another instance, it provided a May 2006 deadline for developing an information technology plan to improve distribution but did not indicate when the plan's recommendations will be implemented. Therefore, we have modified our recommendations to require that DOD specify when these actions will be completed. In two instances DOD cited actions we do not consider sufficient to fulfill our recommendations. The department stated that its annual Industrial Capabilities Report to Congress, as well as the budget process and other forums, provide adequate information on the acquisition of critical items. While we agree that the report provided visibility about some items, such as body armor, it did not identify concerns about acquiring up-armored HMMWVs and kits. Therefore, we do not believe current reporting forums provide Congress with the consistent visibility and information needed to make informed decisions on actions that could speed the acquisition of critical items.
In another instance, DOD cited the establishment of the Deployment and Distribution Operations Center in CENTCOM as its means of testing improvements to distribution capabilities. While the center may improve deployable logistics capability, the department did not commit to actions, as we recommended, that would ensure through simulation and field training that there are sufficient trained personnel and equipment to meet the requirements of the operational plans—a problem in theater before and during the arrival of combat forces. Therefore, we continue to believe these recommendations have merit. DOD did not commit to any specific action to improve transparency to Congress of the risks associated with inadequately funding Army war reserves. The department said this risk is already reported to Congress in the budget process and in a number of other ways. As stated in this report, the methods cited by DOD, such as the budget documentation, do not ensure consistent transparency by clearly stating the operational risks of underfunding the Army war reserves. Therefore, we believe our recommendation has merit and have added a matter for congressional consideration that suggests Congress may wish to require DOD to disclose the risks associated with not fully funding the Army war reserve. While DOD agreed with the intent of three recommendations, it did not commit to any specific actions to address them. The recommendations would (1) ensure item managers are provided operational information in a timely manner, (2) reduce the time delay in granting increased obligation authority to AMC and its subordinate commands, and (3) revise joint doctrine to clarify responsibility and authority for synchronizing distribution. We believe that these recommendations have merit and have cited the reasons in our comments in appendix XI. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army and the Navy; the Commandant of the Marine Corps; the Commander, U.S. Central Command; the Commander, U.S. Transportation Command; the Director of the Defense Logistics Agency; and other interested parties. If you or your staff members have any questions regarding this report, please contact me at (202) 512-8365. Key contributors to this report are listed in appendix XII. To address our objectives, we employed a case study approach, selecting nine supply items with reported shortages as a way to assess the availability of supplies and spare parts during Operation Iraqi Freedom (OIF). We judgmentally selected the nine items because we believed they presented possible shortages with operational impacts based on our prior work on OIF logistics and other sources, such as military "after-action" reports on OIF operations, military and contractor "lessons learned" studies, briefings, congressional testimonies, and interviews with Department of Defense (DOD) and military service officials, covering the time period between October 2002 and September 2004. We selected the items to encompass a variety of supply sources and users within DOD, the Army, and the Marine Corps. The items we selected and the supply sources for each item are shown in table 6. To verify the existence of reported shortages and to determine their extent, we interviewed DOD logistics officials and industrial-base suppliers.
We also collected and analyzed supply data, such as requirements, customer demands, inventory levels, production levels, back order quantities, and funding levels, for the period from September 2001 through September 2004 for the selected items. We considered an item to be in short supply if the data we obtained showed that demands placed by the warfighter exceeded availability in the supply system. To determine the impact of shortages for the selected items, we interviewed officials in Army and Marine Corps combat forces that were deployed in OIF and also reviewed DOD and military services' after-action reports, lessons learned studies, readiness reports, and other documents. For a complete list of these organizations, see table 7. To determine what deficiencies contributed to identified supply shortages, we interviewed officials and collected documentation from DOD's supply management organizations. On the basis of the case studies, we identified deficiencies that affected the supply of two or more of the items. We analyzed data from DOD logistics agencies, operational units, and service and geographic commands to evaluate the significance of these deficiencies to DOD's overall logistics system. We also reviewed prior GAO reports, DOD and military services' after-action reports, military and contractor lessons learned studies, DOD directives and regulations, and reports by DOD and external experts, including Accenture, the Center for Naval Analysis, the RAND Corporation's Arroyo Center, the Science Applications International Corporation, and the U.S. Army Audit Agency. In addition, we analyzed supply data for each item to identify and corroborate deficiencies contributing to item shortages. To determine what actions DOD has taken to improve the availability of supplies for current and future operations, we collected data from military service and joint command headquarters personnel to identify short- and long-term efforts to address supply shortages. However, we did not evaluate their potential for success. We also reviewed DOD logistics and strategic planning documents that provide guidance on improving logistics support for military readiness. We assessed the reliability of the supply data we obtained by interviewing agency officials knowledgeable about the data and by corroborating the data with other information about supply shortages gathered from other DOD and military service organizations. When data specifically for Iraq were not available, we used worldwide data, since OIF received supply priority. With the exception of data on track shoes, we determined that the data were sufficiently reliable for our purposes. In the case of track shoes, we determined that the data provided by the Army's Tank-automotive and Armaments Command (TACOM) were not sufficiently reliable for our purposes and did not use them. However, we were able to identify relevant information from TACOM's periodic reporting to describe the item's supply status. We determined that these data were sufficiently reliable for our purposes. In the case of lithium batteries, the Communications-Electronics Command (CECOM) switched database systems in July 2003. We determined that the data from the new system were sufficiently reliable for the purposes of showing trends and graphing, but we based our findings on program data prior to the system change. We also identified relevant information from other DOD sources to confirm reported shortages of lithium batteries.
The limitations of the data collected for armored vehicle track shoes and lithium batteries are discussed in appendixes III and VI, respectively. We also determined that funding data were sufficiently reliable for our purposes by comparing data received from multiple sources within DOD. We performed our audit from March 2004 through March 2005 in accordance with generally accepted government auditing standards. The Marine Corps' AAV is a full-tracked landing vehicle designed to carry up to 25 people from ship to shore and is used as an armored personnel carrier on land. The Marine Corps used more than 550 AAVs to transport personnel during operations in OIF. Among the critical parts of the AAV is its generator, which provides needed electrical power (see fig. 6). The supply and distribution of AAV generators is a shared responsibility: the Marine Corps Logistics Command manages the supply of repairable generators, and the Defense Supply Center Columbus supplies new generators. During OIF, the 2nd Force Service Support Group was responsible for moving supply shipments from the port to Iraq, and the 1st Force Service Support Group had responsibility for moving supplies once they reached Iraq. The 3rd Assault Amphibian Battalion, part of the 1st Marine Expeditionary Force, was in charge of maintenance for all AAVs in theater. AAV generators were not available to the warfighter at some point between October 2002 and September 2004. We consider generators to have been in short supply because 84 were ordered but only 15 were received. The Marine Corps' 3rd Assault Amphibian Battalion experienced a shortage of generators needed to repair AAVs during and after major combat operations in Iraq, according to officials. Both the Marine Corps Logistics Command and GAO have reported that the long distances the vehicles traveled, combined with combat conditions, placed the equivalent of 5 years of wear and tear on the vehicles over a 6- to 8-week period. As a result of this accelerated wear and tear, the vehicles' parts—including generators—wore out quickly. To meet the rapid rise in demand (see fig. 7), the battalion submitted orders for 84 generators between January and June 2003. According to supply management data, the Defense Supply Center Columbus sent 64 new generators and the Marine Corps Logistics Command sent 76 repaired generators—a total of 140—to the theater during major combat operations. However, the battalion reported that it received only 15 generators. Officials from the 1st Force Service Support Group in Iraq stated they did not know why they did not receive all of the generators shipped from the Marine Corps Logistics Command and the Defense Supply Center Columbus. Although 3rd Assault Amphibian Battalion officials stated that their demand for generators exceeded the number received in theater, they did not report any decline in AAV operational readiness. The reported operational readiness of AAVs in the Iraqi theater remained at about 89 percent most of the time between February 2003 and October 2003. However, a 3rd Assault Amphibian Battalion official noted that, in order to maintain this readiness rate, spare parts from about 40 non-operational vehicles were used to support combat-capable vehicles. Poor asset visibility in the Marine Corps' in-theater distribution system contributed to the shortage of generators needed to repair AAVs.
Although good asset visibility is one of the main tenets of logistics supply systems, the Marines had difficulty maintaining visibility over the 140 generators shipped to the theater. Marine Corps Logistics Command and Defense Supply Center Columbus officials tracked generators to the theater, but their visibility over these shipments ended there. Once the generators arrived in theater, the 2nd Force Service Support Group became responsible for maintaining visibility of the supply. However, its officials stated they did not have visibility of the generators shipped into Iraq. The 1st Force Service Support Group, which directly supported units fighting in Iraq, also indicated an inability to track requisitioned supplies. While 15 of the generators were received by the 3rd Assault Amphibian Battalion, neither the Marine Corps' Force Service Support Groups nor we were able to track the remaining 125 generators. A contributing factor to the shortage of generators was the difficulty the Marine Corps faced in maintaining visibility over requisitioned and warehoused spare parts because of incompatible and unstable software and other visibility systems. Before OIF began, the Marines experienced difficulties maintaining visibility over the generators in their in-theater distribution center in Kuwait. Defense Supply Center Columbus officials reported that generators were shipped to the theater to support requirements forecasted by deploying units. However, according to an official with the 3rd Assault Amphibian Battalion, instead of being delivered to the units, the generators were warehoused in the distribution center. One reason for the poor asset visibility at the warehouse in February 2003 was the failure of the warehousing software—the Storage Retrieval Automated Tracking Integrated System—to work properly. Moreover, the 1st Force Service Support Group used one version of the Asset Tracking Logistics and Supply System (ATLASS) requisitioning software in theater, while the 2nd Force Service Support Group used another version, ATLASS II+. Because the two versions could not interface, personnel of the 1st Force Service Support Group reported that they reentered requisition information manually to move data between the systems. Personnel at the Marines' distribution center in Kuwait entered requisitions into the Supported Activity Supply System, a stand-alone inventory system. According to Marine Corps logistics officials, neither system could track requisitions or the parts related to them through the supply requisition process. In addition, Marine Corps personnel were expected to use radio frequency identification technology to help maintain asset visibility during supply distribution. According to 2nd Force Service Support Group personnel, however, Marine units in theater did not have sufficient training or equipment to read the tags needed to support the use of this technology. When the supply system did not respond to the demand for generators, Marine Corps personnel noted that units went outside the supply system, through e-mail and telephone communications, to locate supplies, such as generators, for AAVs and other equipment. To minimize data entry errors, Marine Corps personnel developed an electronic process to transfer data between the ATLASS and ATLASS II+ software systems, illustrated notionally in the sketch below.
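The electronic transfer process just described can be pictured as a simple record translation between two incompatible layouts. The sketch below is entirely notional: the report does not describe the actual ATLASS or ATLASS II+ data formats, so every field name here is an assumption made for illustration:

    # Hypothetical illustration of an electronic transfer between two
    # incompatible requisitioning systems: each requisition record is
    # mapped from one layout to the other instead of being rekeyed by
    # hand. Field names are invented; they are not the real ATLASS or
    # ATLASS II+ schemas.
    def translate_requisition(record):
        """Map a requisition from a notional ATLASS layout to a notional
        ATLASS II+ layout."""
        return {
            "doc_number": record["requisition_id"],
            "stock_number": record["nsn"],
            "qty": record["quantity"],
        }

    atlass_record = {"requisition_id": "M0312345", "nsn": "2815-01-234-5678", "quantity": 4}
    print(translate_requisition(atlass_record))

Automating the mapping removes the transcription errors that manual reentry introduced, which was the point of the Marines' change.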
According to logistics personnel, to improve visibility of requisitions through the system, the Marine Corps streamlined its requisitioning process by using ATLASS to enter requisitions into the Supported Activity Supply System and by eliminating the use of ATLASS II+ in theater. To support greater use of radio frequency identification tags in theater, the Marine Corps, according to 2nd Force Service Support Group officials, has provided training to personnel deploying to Iraq and increased the use of the technology to improve asset visibility. According to these officials, the Marine Corps is also evaluating an Army information system that monitors assets moving through the supply system to determine whether the Army's system can be adapted for Marine Corps use. The 2nd Marine Expeditionary Force has developed a Marine Air-Ground Task Force Distribution Center initiative. The Marines stated that the initiative, implemented in September 2004, helps them manage the distribution system by bringing together the Traffic Management Office, deployed supply units, and transportation assets to replicate the in-theater supply process. The initiative will enable them to fully replicate the supply system, including the use of radio frequency identification tags and satellite transponders. During OIF, U.S. forces relied heavily on armored vehicles such as Abrams tanks and Bradley Fighting Vehicles. For example, at the beginning of combat operations in Iraq, the 3rd Infantry Division had a fleet of 252 Abrams tanks and 325 Bradley Fighting Vehicles drawn from Army prepositioned stock. Troops used Abrams tanks to lead attacks in urban areas with the support of infantry equipped with Bradley Fighting Vehicles. Army officials said that Abrams tanks and Bradley Fighting Vehicles were extremely effective for operations in urban terrain. A critical component of both types of armored vehicles is the track that enables them to move; each track is composed of dozens of metal shoes (see fig. 8). The Army's TACOM Track and Roadwheel Group buys the track shoes that are used on Abrams tanks and Bradley Fighting Vehicles. Goodyear is the sole-source supplier of Abrams track shoes and the major producer of Bradley track shoes; VAREC also produces Bradley track shoes. In fiscal year 2003, TACOM reported spending $195.2 million to purchase track shoes for all tanks and vehicles. However, worldwide demand for Abrams and Bradley track shoes alone totaled $257.4 million. Of the $195.2 million, TACOM reportedly spent $98.6 million on Abrams track shoes and $52.4 million on Bradley track shoes. We were unable to obtain reliable data on forecasted requirements, demands, back orders, and inventory for track shoes from TACOM's Track and Roadwheel Group. The group's officials told us that because the models and studies used to compute the data can produce inaccurate results, they could not validate the data. As a result, we were unable to document the extent of shortages based on these data. However, group officials were able to provide us with the data used to inform AMC on the status of track shoe shortages. According to TACOM, these data are based on information provided by units in theater and best represent true demand. We corroborated these secondary data with classified data and used them in our analysis. Track shoes for the Abrams tanks and Bradley Fighting Vehicles were not available to the warfighter at some point between October 2002 and September 2004.
We consider this item to have been in short supply because demand exceeded the amount of inventory available to meet warfighters' needs. U.S. forces and logistics personnel reported critical shortages of track shoes for Abrams tanks and Bradley Fighting Vehicles during OIF, and these shortages negatively affected their mission. In undertaking that mission, U.S. forces subjected these tanks and vehicles to the equivalent of 3 years of high-intensity training during major combat operations in Iraq. Because of the extensive mileage placed on the tanks and Bradley vehicles—exacerbated by bad road conditions and extreme heat—vehicle parts, particularly track shoes, wore out quickly. Although TACOM was able to meet the track shoe demands of units preparing to deploy as well as those already deployed in OIF, it began to experience difficulties in providing track shoes to units in theater around April 2003. The demand for Abrams tank and Bradley Fighting Vehicle track shoes in May 2003 was 5 times the March 2003 forecasted demand (see table 8). To meet the surge in demand for Abrams track shoes, TACOM negotiated with Goodyear to raise the production rate—already increased from the normal peacetime rate of 10,000 per month to 15,000—to 17,000 per month in December 2002, to 20,000 per month in May 2003, and to 25,000 per month in July 2003. However, these increases in production were still not sufficient to meet OIF demands. In May 2003, for example, the actual demand for Abrams track shoes rose to more than 55,000. As a result of track shoe shortages, some Abrams tanks and Bradley Fighting Vehicles could not operate during the summer months of 2003. For example, the 4th Infantry Division reported it could not obtain sufficient quantities of track shoes to meet operational needs. At one point during post-combat operations, the division had an operational requirement for 23,626 Abrams track shoes, of which 8,002 were shipped but only 1,028 were received. To support the Bradley Fighting Vehicles, the division had an operational requirement for 29,911 track shoes, of which 4,591 were shipped but only 744 were received. As a result of its inability to obtain more track shoes and other suspension parts, the 4th Infantry Division reported that its readiness rates for both types of combat vehicles deteriorated. For example, one of its brigades reported that 11 of its 44 Abrams tanks were unavailable during post-combat operations because of the lack of track shoes. Track shoe shortages also negatively affected the 3rd Infantry Division. On June 11, 2003, the division reported that of the 185 Abrams tanks it had on hand, 111 (60 percent) were deemed "non-mission capable due to supply." Of the 237 Bradley Fighting Vehicles it had on hand, 159 (67 percent) were deemed non-mission capable due to supply. According to 3rd Infantry Division officials, the tanks and vehicles were unavailable because replacement track shoes were not available to the units. For example, between April 16 and 18, 2003, one divisional supply support activity in Kuwait had 22,074 Abrams track shoes and 18,762 Bradley track shoes on back order. To alleviate the impact of track shoe shortages on Bradley Fighting Vehicle readiness, theater commanders wanted to bring an additional 1,407 up-armored HMMWVs into theater. However, as detailed in appendix X, the procurement of additional up-armored HMMWVs was also problematic.
Inaccurate war reserve requirements, inaccurate forecasted requirements, and erratic funding all affected TACOM's ability to provide track shoes. Inaccurate war reserve requirements hampered TACOM's ability to supply units in theater at the beginning of OIF. Governed by Army Regulation 710-1, war reserves are intended to provide the Army with interim support to sustain operations until it can be resupplied with materiel from the industrial base. For TACOM, the war reserve requirement of 3,635 Abrams track shoes and 1,800 Bradley track shoes identified in September 2001 was not enough to support OIF demands. Although the war reserve requirement increased in December 2002, the new requirement of 5,230 Abrams track shoes and 5,626 Bradley track shoes was still not enough to meet demands. To more accurately reflect track shoe usage in OIF, TACOM officials increased the war reserve requirements in December 2003 to 32,686 Abrams track shoes and 34,864 Bradley track shoes. TACOM officials made the change manually rather than using the Army War Reserve Automated Process, which was last run in fiscal year 1999. Since then, the number and type of vehicles have changed, but the official process has not been performed again to update the requirements. TACOM officials have been waiting for input based on the defense planning guidance from the Department of the Army to initiate a new process; at the time of our review, they were not certain when they would receive the guidance. Until the model is run, TACOM officials will continue to make manual changes to war reserve requirements. In addition, both we and Accenture have questioned the validity of the Army's war reserve requirements. In a 2001 report, we found that war reserve requirements could be inaccurate because the calculations were not updated to reflect new consumption rates, the requirements determination methodology might not be consistent with planned battlefield maintenance practices, and the requirements were based on internal estimates of what the industrial base could provide rather than on well-defined industry data. A 2003 study by Accenture concluded that part of the reason for the low war reserve requirements was that the forecasting process is labor intensive and time consuming and suffers from inaccurate input data. In fiscal year 2003, TACOM underestimated the number of Abrams and Bradley track shoes needed worldwide. Although TACOM revised its requirements at the end of each quarter, its estimates for Abrams and Bradley track shoes still fell short. For example, in April 2003 the forecasted monthly requirement for Abrams track shoes was 11,125, less than half of the actual demand of 23,462 shoes. The track shoe budget forecasts further illustrate the discrepancy between forecasted and actual requirements. Based on its budget-forecasting tool, TACOM forecasted that it would need $46.8 million for Abrams track shoe purchases in fiscal year 2003. At the end of the year, actual demands for Abrams track shoes totaled $194.9 million—416 percent of the forecasted amount. For Bradley track shoes, the group forecasted it would need $17.8 million in fiscal year 2003 to meet customer demands; by the end of the fiscal year, actual demands totaled $62.5 million—351 percent of the forecast. Even if TACOM's revised track shoe estimates had been accurate, the revisions would have come too late to meet the summer 2003 demand.
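The forecast-versus-actual comparison above can be verified directly from the reported figures; a minimal check in Python:

    # Fiscal year 2003 track shoe budgets: forecasted versus actual,
    # in millions of dollars, using the figures reported above.
    abrams_forecast, abrams_actual = 46.8, 194.9
    bradley_forecast, bradley_actual = 17.8, 62.5

    print(f"Abrams:  actual was {abrams_actual / abrams_forecast:.0%} of forecast")   # 416%
    print(f"Bradley: actual was {bradley_actual / bradley_forecast:.0%} of forecast") # 351%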
In order to have supplies on hand when demands are received, item managers need to award contracts allowing for sufficient procurement lead time. TACOM officials explained that track shoe manufacturers need an average of 4 to 6 months to produce and deliver track shoes once they receive a contract. The high demand for track shoes in the late spring and summer of 2003 was not forecasted in October 2002, when contracts would have had to be awarded for TACOM to meet customer needs adequately. The failure to forecast the high demand experienced during the early months of OIF was partly due to the requirements forecasting model used and partly due to the lack of information and guidance provided to TACOM. In its conclusions, the Accenture study on track shoe shortages found that the requirements forecasting model failed to accurately forecast future demands because the model uses a simple moving average based on 24 months of historical demand, which does not capture dynamic changes in item usage. This meant that large increases in demand during the last couple of months of the 24-month period would not result in a corresponding increase in the forecasted demand, as the sketch below illustrates. In addition, TACOM officials stated that, prior to the onset of combat operations, they did not receive adequate planning guidance on operational plans from AMC that they could have incorporated into their track shoe forecasts. Consequently, TACOM determined the requirements based on the model and on the item managers' expertise and knowledge of the item, as allowed under Army Regulation 710-1. Subsequent requirements throughout fiscal year 2003 were also understated. TACOM officials reported that although they regularly requested information about the track shoes' usage and durability, which would have helped them better gauge actual demand, they received limited information and input from units in the field. As OIF continued throughout 2003, TACOM held teleconferences with AMC Logistics Support Element representatives and with units in theater to obtain updated information about item usage. During fiscal year 2003, TACOM did not receive sufficient obligation authority in time to buy track shoes to meet growing demands. TACOM's total funding requirements in fiscal year 2003 amounted to $257.4 million—$194.9 million for Abrams track shoes and $62.5 million for Bradley track shoes. TACOM officials told us that they could not buy sufficient track shoes because they had received only $216 million in obligation authority to buy all of the items they managed. In the end, TACOM spent only $151.0 million on track shoes—$98.6 million for Abrams tanks and $52.4 million for Bradley Fighting Vehicles. In addition to insufficient obligation authority, the uncertainty of the funding flow affected the manufacturer's ability to produce track shoes. Goodyear, a primary producer of track shoes for TACOM, was in danger of closing down its track shoe production plant in April 2003 because of a lack of contracts. TACOM officials stated that they could not award contracts consistently because they did not know how much obligation authority they would receive or when the next allotment would arrive. Because of the acute shortage of track shoes, TACOM immediately awarded contracts to Goodyear whenever obligation authority became available.
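The lag inherent in the 24-month simple moving average described above can be demonstrated with a brief sketch; the demand history below is invented solely to mimic a late surge like the one TACOM experienced:

    # Demonstrates why a 24-month simple moving average barely responds
    # to a demand surge in the most recent months. Demand history is
    # hypothetical.
    history = [10_000] * 22 + [23_000, 55_000]   # surge in the last 2 months

    forecast = sum(history) / len(history)       # simple moving average
    print(f"forecasted monthly demand: {forecast:,.0f}")  # about 12,417
    print(f"latest actual demand:      {history[-1]:,}")  # 55,000

Because only 2 of the 24 months reflect the surge, the average scarcely moves even though current demand is more than four times the forecast.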
Because deliveries under these immediately awarded contracts were accelerated, Goodyear reported that it had a very low level of remaining workload and was again in danger of closing down the track shoe production plant unless it received additional contracts. According to TACOM officials, additional obligation authority was aggressively requested throughout fiscal year 2003 to support purchases of track shoes as well as other supply items. Because track shoes are critical to mission success, officials stated that they usually receive more funding than other commodities managed by TACOM; however, releases of additional obligation authority were delayed in some instances. TACOM officials stated that when they need additional obligation authority, they request it from AMC, which then requests it from Army headquarters. These requests must be validated and justified (based on past sales) at each level before the Office of the Under Secretary of Defense (Comptroller) approves the release of additional obligation authority. TACOM officials expressed frustration with the process and complained that AMC and Army headquarters were not aggressively pursuing the issue and did not fully grasp the magnitude of the impact on units in theater caused by the lack of obligation authority provided to item managers. To address inaccuracies in war reserve requirements, TACOM has manually updated its requirements levels for track shoes rather than wait for AMC to implement the next Army War Reserve Automated Process. To overcome inaccuracies in the requirements forecasting model, TACOM depended on item managers' judgment and expertise to determine demand more accurately. Item managers worked with available information provided by AMC and with input from units in theater. In addition, priority was given to the Iraqi theater of operations, and available track shoe production was shipped to support units in Iraq. TACOM worked with theater commanders to expedite and prioritize shipments of track shoes. To address funding shortfalls, TACOM continually requested that any additional obligation authority be made available to buy track shoes during fiscal year 2003. Because track shoes were considered critical, mission-essential items and their shortage greatly affected theater operations, officials from the Track and Roadwheel Group said that they usually received more funding and attention than other supply groups within TACOM. For example, in June 2003, the Track and Roadwheel Group received $64 million that was specifically meant to prevent Goodyear from closing down its track shoe production plant. In August 2003, TACOM received an additional $70 million for track shoe purchases. To improve track shoe availability during OIF, the Army made a $5.2 million investment in Goodyear production facilities to meet the surge requirements and to sustain the viability of the track shoe supplier. At the time of this review, TACOM officials did not identify any long-term efforts to correct the problems identified with war reserve requirements, forecasted requirements, or funding shortfalls. The U.S. military's new Interceptor body armor is composed of two primary components: an Outer Tactical Vest and two Small Arms Protective Insert plates (see fig. 9). The Outer Tactical Vest consists of a combination of a soft fabric vest and para-aramid fiber panels that provide protection against shrapnel and 9 mm ammunition. The plates consist of a combination of ceramic tiles and polyethylene fiber that, when inserted into the vest, provide protection against rifle rounds up to 7.62 mm.
The vest accepts two plates, one for the front and one for the back; additional attachments can increase protection. The new body armor provides improved protection and weighs less than the older version—the Personnel Armor System for Ground Troops, or "flak" vest—which protects only against shrapnel. The new body armor weighs 16.4 pounds, while the older vest weighs 25.1 pounds. The Army planned to issue the Interceptor body armor to U.S. forces over an 8-year period between 2000 and 2007. It began to distribute the armor to military personnel during Operation Enduring Freedom and, on the basis of the armor's effectiveness, decided to accelerate its fielding for OIF. DLA's Defense Supply Center Philadelphia manages the Interceptor body armor for the Army. The Marine Corps has its own version of the body armor, constructed with the same materials as the Army version; the Marine Corps Systems Command and the U.S. Army Robert Morris Acquisition Center manage it for the Marine Corps. Both services rely on the same manufacturers. Interceptor body armor was not available in sufficient quantities to U.S. military forces in Iraq at some point between October 2002 and September 2004. We consider this item to have been in short supply because demand exceeded the production output necessary to meet the needs of the warfighter. While there were shortages of Interceptor body armor during OIF, Coalition Forces Land Component Command officials stated that military personnel deployed with either the vest component of the new body armor or the old body armor. CENTCOM officials stated that all personnel in Iraq had the new armor by January 2004, 8 months after combat operations were declared over. The new body armor was initially intended for limited numbers of personnel, such as dismounted infantry; however, this changed during OIF. In May 2003, the Army changed the basis of issue to include every soldier in Iraq. Then, in October 2003, CENTCOM further expanded issuance of the body armor to include all U.S. military and DOD civilian personnel working within CENTCOM's area of responsibility, including Iraq, Kuwait, and Afghanistan. Demand for the vest component of Interceptor body armor increased rapidly both at the beginning of OIF, when troops began combat operations, and again in late 2003 during stabilization operations. Worldwide quarterly demand for vests rose from 8,593 in December 2002 to 77,052 in March 2003—the onset of combat operations. The demand for vests continued to spike upward, topping out at 210,783 in December 2003. The number of back orders also rapidly increased over this period (see fig. 10). By December 2003, the worldwide number of back orders had reached 328,023, with DLA mandating that OIF requisitions receive priority. In contrast, the number of vests actually produced during December 2003 was only 23,900. Similarly, the demand for plates increased with the onset of combat operations and again during stabilization operations in late 2003. The demand for plates increased more than tenfold, from a quarterly demand of 9,586 plates in December 2002 to a quarterly demand of 108,808 plates in March 2003 at the beginning of OIF. As figure 11 shows, with the change in the basis of issue for the Interceptor body armor in October 2003, the demand for plates rose rapidly again, peaking at 478,541 plates in December 2003. In addition, during late 2003, the number of back orders for plates also increased rapidly.
By November 2003, the number of worldwide back orders had peaked at 597,739 plates, with DLA giving OIF requisitions priority. In contrast, production output during that month totaled only 40,495 plates. Military officials expressed serious concerns over the shortage of Interceptor body armor. Army officials stated that soldiers' morale declined as units waited for the armor to reach theater. Because of the shortages, CENTCOM officials stated that they prioritized the issue of the new body armor to those who were most vulnerable. In addition, there was a lack of body armor among support personnel, such as the Army's 377th Theater Support Command, while insurgents were attacking and interdicting supply routes in Iraq. Because of the shortages, many individuals bought body armor with personal funds. The Congressional Budget Office estimated (1) that as many as 10,000 personnel purchased vests with personal funds, (2) that as many as 20,000 purchased plates with personal funds, and (3) that the total cost to reimburse them would be $16 million in 2005. Temporary shortages of the Interceptor body armor occurred because of acquisition delays related to a lack of key materials, as well as distribution problems in theater. Insufficient quantities of the key materials used to make vests and plates delayed acquisition intended to meet the increasing demand for Interceptor body armor. According to DLA officials, shortages of critical materials still limit worldwide production to approximately 35,000 vests and 50,000 plates per month. A production lead time of 3 months has also limited the industrial base's capacity to accelerate its production levels to meet increasing demand. DLA officials stated that the production of vests and plates was impaired by the limited availability of critical materials. The vest shortfall was due to a shortage of Kevlar, a para-aramid fiber. DuPont Chemicals is the only domestic producer of the para-aramid fiber panels used in the vests; however, an exception under the Defense Federal Acquisition Regulation Supplement allowed vest contractors to use Twaron fiber panels manufactured in the Netherlands as a replacement for Kevlar fiber panels. The shortfall in ceramic plates was due to insufficient quantities of two materials needed to produce them. The initial shortfall was due to the limited availability of SpectraShield. Until April 2004, Honeywell was the only domestic producer of SpectraShield, and it had competing commercial requirements for the material. Plate producers responded to the limited availability of SpectraShield by manufacturing modified plates that replaced SpectraShield with Kevlar and other para-aramid fiber materials. While these plates met ballistic protection requirements, they weighed a half pound more and required a service waiver for acceptance. In April 2004, DSM Dyneema, a foreign firm that produces Dyneema, a material equivalent to SpectraShield, opened a production facility in the United States and began to produce and sell the product. SpectraShield and Dyneema are the only materials that meet the services' ballistic protection and weight requirements for Interceptor body armor, and because of their limited availability, both materials are under Defense Priorities and Allocation System control. Plate production was later constrained by the limited availability of ceramic tiles. According to DLA, current production output is subject to further increase as DSM Dyneema increases its Dyneema production and additional ceramic tiles are qualified as meeting specification requirements.
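The speed with which plate back orders accumulated follows directly from the gap between monthly demand and monthly production output described above; the sketch below shows the mechanism with invented monthly figures:

    # Back orders accumulate whenever monthly demand exceeds monthly
    # production. The figures are hypothetical, chosen only to show how
    # quickly a backlog compounds.
    demand     = [60_000, 120_000, 160_000, 150_000]   # units demanded each month
    production = [40_000,  40_000,  45_000,  50_000]   # units produced each month

    backlog = 0
    for month, (d, p) in enumerate(zip(demand, production), start=1):
        backlog = max(0, backlog + d - p)
        print(f"month {month}: backlog = {backlog:,}")

Even modest monthly increases in production cannot shrink the backlog until production actually exceeds demand.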
Attempts to accelerate fielding of the new body armor met with some success but also caused problems. According to an Army Office of the Inspector General report, accelerated fielding resulted in supplying body armor to soldiers at a much faster pace than normal; in some cases, the armor was distributed virtually directly from the factory to the warfighter. The report stated that the accelerated fielding did not allow time for the project manager to coordinate with units and allow them to establish sufficient accountability in theater, as required by Army regulations. As a result, a lack of in-transit visibility, inaccurate reporting of on-hand quantities, lag time in recording the receipt of plates, and other accounting errors led to the temporary loss of visibility of between 5,000 and 30,000 sets of plates. To meet the surging demand for plates, DOD used authority under the Defense Production Act to allocate production of SpectraShield. More specifically, the Office of the Deputy Under Secretary of Defense (Industrial Policy) used its authority under the Defense Priorities and Allocation System to direct Honeywell to accelerate deliveries of SpectraShield in support of OIF on six occasions in 2003. According to the Acting Under Secretary of Defense (Acquisition, Technology, and Logistics), Honeywell did this at the expense of its commercial orders. To increase industrial-base production capacity, DLA stated that it increased its number of vest suppliers from 1 to 4; plate manufacturers from 3 to 8 (including manufacturers of overweight plates); and ceramic tile suppliers from 4 to 10 (including suppliers of overweight tiles). DLA has recommended that it be given management of Interceptor body armor requirements for all of the services and that the services establish war reserve stock levels for the new body armor to mitigate the lead time in industrial-base production. It has also requested the authority to purchase and maintain an inventory of the materials necessary for producing vests and plates, as well as to contract with vendors who have the capacity to use such stored materials during times of high demand. JSLIST is a protective clothing ensemble that includes a lightweight chemical-biological protective suit, multi-purpose over-boots, and gloves (see fig. 12). When combined with a chemical protective mask, JSLIST provides protection from chemical and biological agents. The suit can be worn over a uniform and body armor. Once removed from its package, the suit can provide protection for 45 continuous days; however, once exposed to an agent, it must be replaced within 24 hours. The sealed suit package has a shelf life of 14 years. The U.S. military began fielding JSLIST in November 2002. Before then, the Army relied on the Battle Dress Overgarment and the Marine Corps depended on the Saratoga suit. Although the military services manage their own inventories of JSLIST suits, DLA serves as the contracting agent. The largest producer is the National Institute for the Severely Handicapped. The primary subcontractor, Blücher, a German company, makes the suit's filter fabric liner. A critical component of the liner is the carbon beads, which absorb chemical and biological agents. The carbon beads are produced for Blücher—through a sole-source contract—by a Japanese company, Kureha.
JSLIST was perceived to be unavailable to the warfighter between October 2002 and September 2004; however, we do not consider this a shortage because all personnel in theater were issued a JSLIST or Saratoga suit and the required spare by February 22, 2003. Some Army officials in theater, as well as National Guard officials in the United States, reported a shortage of JSLIST; however, our analysis indicated no actual shortage. Despite this perception of a shortage, neither the Army nor the Marine Corps indicated any impact on the operational capabilities of deployed units. The Army began to field additional JSLIST suits to units deployed in theater in November 2002, in response to a congressional request. By February 22, 2003, the Army's Central Command reported that every Army unit had two suits for every soldier in theater; moreover, the theater supply base had one spare suit for every soldier. Similarly, the Marine Corps reported it had sufficient stock during OIF to issue one Saratoga suit and hold two additional suits for each Marine. The perception of a JSLIST shortage emerged in late 2002 and early 2003 because of a change in requirements, poor asset visibility, and concerns about production capacity. Changes in requirements increased the demand for JSLIST. Before October 2002, the Army's and Marine Corps' requirements called for one chemical-biological protective suit and one backup for each soldier or marine in theater. To meet this requirement, the services planned to use the older suits (e.g., the Army's Battle Dress Overgarment and the Marines' Saratoga suit) and eventually supplement them with the newer JSLIST. In October 2002, however, the House Committee on Government Reform requested that DOD direct the services to issue JSLIST to all U.S. forces stationed in the Middle East, thereby increasing the servicewide demand for JSLIST. According to Marine Corps officials, this request expanded the number of personnel who needed suits to include not only DOD military personnel but also DOD civilian contractors and members of other external organizations. Although Marine Corps officials stated they had planned to provide protective suits for non-military personnel before the congressional request and had acquired and stocked 400 sets of Saratoga suits for this eventuality, they found more personnel than expected who needed the protective gear. Moreover, although the services were required to provide suits for all personnel in theater, there was no DOD policy to guide the procurement of these items. Although the availability of JSLIST was sufficient, the sizes of the available suits were a problem for some soldiers. Initial orders for JSLIST did not take into account the fact that the suits would be worn over body armor and, thus, that larger sizes would be needed. According to DLA officials, units also did not consider that some National Guardsmen and reservists would need larger suits than those typically stocked to support the active-duty forces. In some cases, poor visibility over National Guard and Army Reserve supply inventories contributed to the perceived nonavailability of JSLIST. For example, Army officials noted that some National Guard and reserve units could not promptly find a sufficient number of JSLIST suits in their inventories to meet requirements because their inventory systems did not provide visibility over inventory in different locations. As a result, Army officials said the deployment of some National Guard and reserve personnel was delayed until a sufficient number of JSLIST suits were located within their inventories.
Although DLA reported operational unit concerns about the production of carbon beads, the agency was able to meet suit demand during OIF. Because of the sole-source subcontractor's limited ability to produce carbon beads, total monthly production was limited to 70,000 to 80,000 suits. DLA officials stated this level of production was sufficient to meet JSLIST requirements, and prior GAO analysis supports their claim. However, DLA officials noted they are concerned about their ability to meet the services' current requirement to replace the 400,000 to 500,000 suits issued in OIF. To meet the additional requirement to supply DOD civilian contractors and other non-military personnel with JSLIST, Army officials noted that suits were shipped directly to the theater for issue. In addition, Marine Corps officials reported drawing on their prepositioned war reserve stocks to meet the additional requirement. To meet the demand for larger sizes, the Director of DLA testified that DLA provided 2,000 custom-made suits for personnel outside the original size range. Moreover, Army officials said that Federal Express was used to expedite the shipment of these suits. To meet suit requirements for all personnel in theater, Army officials reported that DLA has introduced four larger suit sizes into its inventory. As part of an effort to improve asset visibility, the Department of the Army has implemented an Individual Protective Equipment Centralized Management Initiative designed to provide units with visibility and shelf-life management of inventory in the United States. To increase production capability, DLA officials stated that they worked in conjunction with DOD to increase the production capability of the existing industrial base and to develop new protective suits for future use. For example, DLA officials stated that Blücher is conducting research to develop an alternative carbon bead in order to reduce reliance on a sole-source producer. In addition, they announced that in June 2004 the JSLIST Additional Source Qualification program at Quantico, Virginia, accepted the use of a new bead from Blücher, which will be available for future suits. BA-5590 and BA-5390 nonrechargeable lithium batteries provide a portable power source for nearly 60 critical military communication and electronic systems, including the Single Channel Ground and Airborne Radio System, the Javelin missile guidance system, and the KY-57 transmission security device. U.S. troops depend on these systems to communicate, acquire targets, and gain situational awareness on the battlefield. The BA-5590 was developed specifically for military use more than a decade ago and, according to military officials, is the most widely used communications battery in the supply system (see fig. 13). The BA-5390 served as a substitute battery when shortages of BA-5590s occurred during OIF. Prior to the start of Operation Enduring Freedom, the Army was moving to a rechargeable battery at the direction of the Environmental Protection Agency, and funding was provided for the environmentally safe battery rather than the disposable lithium battery. However, the disposable batteries are well adapted to fast-paced mobile operations because they do not have to be recharged. Before OIF, SAFT was the only qualified producer of BA-5590s for the military, and in early 2003 Eagle-Picher Technologies began delivering BA-5590s to augment SAFT's output. Before and during OIF, CECOM's Logistics and Readiness Center bought and managed DOD's family of lithium batteries.
As of September 30, 2004, this responsibility was transferred to DLA's Defense Supply Center Richmond; CECOM, however, will continue to be responsible for technical issues related to lithium batteries. During the period covered by our review, CECOM used several methods to derive inventory management data. Before July 2003, CECOM used the legacy Commodity Command Standard System, and we consider data derived from this system to be sufficiently reliable for our purposes. In July 2003, CECOM converted to a new database, the Logistics Modernization Program, which encountered stabilization and data clean-up issues. To overcome these issues, CECOM item managers obtained inventory management information from the Logistics Modernization Program as well as from manual computations. While these data are sufficiently reliable for the purpose of showing trends and graphs, our findings rely primarily on data from the Commodity Command Standard System from the period before July 2003. Nonrechargeable lithium batteries, specifically BA-5590s and BA-5390s, were available in only limited quantities to the warfighter between October 2002 and September 2004. We consider this to be a shortage because the monthly demand and back orders for these batteries exceeded the monthly inventory that CECOM had available to supply U.S. forces in OIF. Demand for nonrechargeable lithium batteries increased dramatically after September 11, 2001, and quickly outpaced the available supply as U.S. troops began preparing for combat operations in Iraq. Demand rose from a peacetime average of below 20,000 batteries per month before September 2001 to an average of 38,313 batteries per month after the United States launched the global war on terrorism and Operation Enduring Freedom in Afghanistan. In January 2003, as thousands of troops were deploying to the Gulf region, the number of batteries requisitioned surged to 140,000, and in April 2003, during major combat operations, the number peaked at 330,600 (see fig. 14). When major combat operations were declared over in May 2003, demand began to fall, and since the fall of 2003 it has leveled off to an average of about 62,000 batteries per month. U.S. troops encountered severe shortages of nonrechargeable lithium batteries because inventory levels (including on-hand and war reserve stocks) were low. As figure 14 shows, inventory levels remained on average below 15,000 batteries during 2002 and into early 2003, increasing only in May 2003 after major combat operations were declared over and demand began to decline. At the same time, the number of back-ordered batteries grew to about 250,000 in January 2003 and, by May 2003, had nearly quadrupled to 900,000. As demand fell and requisitions were filled, the number of back orders began to drop in the summer of 2003, and by the end of 2003 inventory levels exceeded back orders. Army and Marine Corps units faced critically low supplies of BA-5590s and BA-5390s during the spring of 2003. On March 24, 2003, a few days after combat operations began, the Marines reported they were down to less than a 2-day supply (rather than the required 30-day on-hand safety level). In early April, Marine officials projected that, given existing worldwide inventories, production capacity, and consumption rates, they would experience degraded communications capacity by early May if the war continued at the same pace.
To mitigate the shortages, the military took several actions, including requiring stationary units to use alternative power sources (e.g., rechargeable batteries) and instituting a weekly Materiel Priority Allocation Board meeting to apportion batteries to the combat units that needed them most. The critical shortages of BA-5590s and BA-5390s during OIF resulted from four related conditions: inadequate war reserve requirements, inaccurate forecasted requirements, a lack of full funding, and acquisition delays due to industrial-base limitations. The Army's war reserve requirements for nonrechargeable lithium batteries were not sufficient to support initial operations. According to Army Regulation 710-1, the war reserve is intended to provide the Army with interim support to sustain operations until it can be resupplied with materiel from the industrial base. According to CECOM officials, before OIF the war reserve requirement for BA-5590s was set at about 180,000 batteries to sustain the first 45 days of war. However, this amount was considerably below the actual demand of nearly 620,000 batteries recorded during March and April 2003. Officials stated that the low pre-war requirement was generated by the Army's war reserve model, which was last updated in 1999; moreover, this model used inaccurate battery failure rates and did not include all of the equipment that used nonrechargeable lithium batteries. Based on their experience during OIF, CECOM officials have increased the current Army and Marine Corps war reserve requirement for BA-5590s and BA-5390s to more than 1.5 million total batteries, an amount equal to OIF's average monthly demand of 250,000 batteries times 6 months of continuous combat operations (see the sketch below). War reserve planners expected inventories to reach the 1.5 million mark by February 2005. In addition to low war reserves, CECOM's official monthly forecasted requirements for nonrechargeable lithium batteries were far below those needed to meet a wartime contingency. Forecasted requirements are developed primarily on the basis of actual demand data for an item from the preceding months and are used to support funding requests to purchase additional supplies. In 2002, CECOM increased its forecasted requirements from a peacetime norm of 24,000 batteries per month to an average of 36,000 per month in response to the global war on terrorism (see fig. 15). These monthly requirements grew to nearly 60,000 in March 2003, when combat operations began in Iraq, but this number was only about one-fifth of the actual demand recorded that spring. Forecasted requirements continued to lag behind demand until mid-summer, when they caught up. According to officials from Central Command Joint Logistics, the pre-OIF monthly requirement figures were low because some combatant commanders did not submit their requirements and the estimates did not reflect all battery usage; as a result, officials said that calculating requirements was purely guesswork. In the summer of 2002, CECOM and AMC officials developed a more realistic contingency requirement for nonrechargeable lithium batteries. Using information from the Operation Plan and other sources, they forecasted a need for 300,000 to 325,000 batteries per month. As figure 15 shows, this estimate closely paralleled the actual demand of 330,600 batteries at the height of major combat operations in April 2003.
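The revised war reserve requirement is a straightforward sustainment computation, average monthly wartime demand multiplied by the number of months the reserve must cover, as the figures reported above confirm:

    # War reserve sizing as described for BA-5590/BA-5390 batteries:
    # average monthly OIF demand times months of sustained combat.
    avg_monthly_demand = 250_000
    sustainment_months = 6

    war_reserve_requirement = avg_monthly_demand * sustainment_months
    print(f"{war_reserve_requirement:,}")  # 1,500,000 batteries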
CECOM officials presented this 300,000- to 325,000-battery-per-month contingency requirement to AMC and the Joint Materiel Priorities and Allocation Board in the fall of 2002 to bolster their request for $38.2 million in additional obligation authority to ramp up BA-5590 and BA-5390 production; they received the funding in early December 2002. The Army's risk-based decision not to fully fund CECOM's requirements, particularly for nonrechargeable lithium batteries, during the several years prior to OIF compounded the shortage problem. As table 9 shows, CECOM had unfunded requirements ranging from $85 million to $419 million for the 3 fiscal years up to and including OIF. In fiscal year 2003, for example, CECOM identified requirements for the command of nearly $1.5 billion but received less than $1.1 billion in obligation authority for the year, resulting in an unfunded requirement of $419 million, or more than 28 percent of the total amount required. The command's unfunded requirements specifically for BA-5590 lithium batteries varied from a high of $4.2 million in fiscal year 2001 to a low of $1.2 million in fiscal year 2002. However, the low figure for fiscal year 2002 occurred because AMC directed CECOM to spend $11.5 million specifically to support its BA-5590 requirement. Our analysis shows that even if CECOM had been able to fund 100 percent of its BA-5590 battery requirement in fiscal year 2002, it would not have been able to meet the growing demands of the global war on terrorism: a fully funded requirement ($22.6 million) would have provided about 33,000 batteries per month, and actual demand exceeded that number for most of the year. The surge in demand for nonrechargeable lithium batteries exceeded the amount that the industrial base could produce, thereby delaying acquisition. Before OIF, CECOM had contracted with only one qualified producer, SAFT, to make BA-5590s. To support the global war on terrorism, SAFT nearly doubled its production, from 32,000 batteries per month in October 2001 to 60,000 per month in September 2002. After receiving $38.2 million in additional obligation authority in December 2002, CECOM increased its orders for BA-5590s with SAFT and added BA-5590s to its contract with Eagle-Picher. It also contracted with Ultralife to make a substitute battery, the BA-5390. According to CECOM officials, both batteries have a 6-month production lead time. Despite CECOM's efforts, the long lead time prevented these three producers from meeting the surge in demand during major combat operations. Army officials stated that if they had received funding earlier, they would have been able to mitigate the effects of this long lead time. As figure 15 shows, while production output increased to over 100,000 batteries per month in the spring of 2003, it did not approach 200,000 until the late summer of 2003 or reach its peak of 250,000 until early 2004. A recent study identified a limited industrial base as the primary cause of the BA-5590 battery shortage: a March 2004 Science Applications International Corporation report concluded that battery shortages and lack of availability were an industrial-base challenge because the supplier was not able to increase production to meet the unforecasted sixfold increase in demand. To overcome production constraints, CECOM negotiated with two other producers, in addition to SAFT, to manufacture BA-5590s and BA-5390s. It also worked with the three producers to augment battery production by going to a 24/7 schedule.
In addition, to expedite shipments, CECOM had SAFT bypass the depot and ship batteries directly to Charleston Air Force Base for air shipment to the theater. According to CECOM, a capital investment of $5 million was made in the three producers in May 2003 to expand their production capacity. DOD took a number of actions to get the limited supply of nonrechargeable batteries to the units that needed them most. According to CENTCOM, the Joint Chiefs of Staff issued a directive to send all available BA-5590s and BA-5390s to CENTCOM's area of responsibility until June 2003. The Joint Staff also put these batteries on the "critical few list," which focused attention on improving the availability of specific items the services and geographic combatant commands reported as critical to their worldwide operations. CECOM and Marine Corps officials said they shifted available batteries from military installations worldwide and also bought batteries on the commercial market. The Army, the Marines, and Coalition Forces Land Component Command also directed troops, especially those in rear units, to use rechargeable batteries when possible. In addition, the Army required soldiers to use rechargeable batteries for garrison duty and training and to maximize their use during peacekeeping operations, and Marine combat units were instructed to do everything possible to reduce nonrechargeable battery consumption rates. Moreover, Coalition Forces Land Component Command was appointed the theater's item manager for batteries, with responsibility for prioritizing and releasing batteries to units. To correct problems with war reserve requirements, CECOM officials said they set the current war reserve requirement for BA-5590s and BA-5390s at more than 1.5 million batteries to better reflect the experiences of OIF; this requirement was expected to be filled by February 2005. To improve battery availability, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness, in January 2004, directed the transfer of battery inventory management from CECOM to DLA as of September 30, 2004. In terms of technological efforts, CECOM officials said they are developing newer, lighter-weight rechargeable batteries that could be powered by solar panels or other energy sources while troops are on the move, reducing dependence on disposable batteries. During Operation Iraqi Freedom, the Marine Corps relied on a variety of helicopters to support its forces during combat operations. These included the UH-1N Huey, a twin-engine utility helicopter used in command and control, resupply, casualty evacuation, liaison, and troop transport, and the CH-53E Super Stallion, a triple-engine cargo helicopter used to transport heavy equipment and supplies. Both types of helicopters require numerous spare parts, including rotor blades, to maintain their operational status (see fig. 16). In Iraq, Marines reported that enemy fire and harsh environmental conditions, such as heat, sand, and unimproved airfields, increased the wear and tear on rotor blades. All Marine Corps helicopter spare part supplies, including rotor blades, are managed by the Naval Inventory Control Point Philadelphia. There were no shortages of rotor blades between October 2002 and September 2004, although increased wear and tear caused by operating from unimproved airfields and the harsh environment, along with back orders, raised concerns.
We do not consider this a shortage because the supply system filled back orders within 2 months, and Marine Corps officials from the 3rd Marine Air Wing reported no major supply shortages of rotor blades for the UH-1N and CH-53E helicopters during OIF. The supply system was able to provide a sufficient quantity of replacement UH-1N and CH-53E rotor blades despite increased demands. For example, the Marine Corps took its forecasted requirement of 16 UH-1N helicopter rotor blades to Iraq. As figure 17 shows, from March 2003 through August 2004, the Marines requisitioned 22 additional rotor blades to support their mission, and the supply system met those demands by filling orders within 2 months of receipt. In addition, air wings from outside the theater supported the demand in Iraq by providing rotor blades from various air stations, ship supply, and Marine Aviation Logistics squadrons. As a result, the Marines were able to maintain a mission capable rate for the UH-1N of 75.4 percent during OIF, compared with a peacetime rate of 79.9 percent in 2000. To date, the Naval Inventory Control Point Philadelphia continues to meet UH-1N rotor blade demands for OIF. The 3rd Marine Air Wing took 33 of the forecasted requirement of 64 rotor blades to support CH-53E helicopters in Iraq. As figure 18 shows, 55 additional rotor blades were ordered through the supply system, with 14 on back order from March 2003 through September 2004; the supply system met those demands by filling orders within 1 month of receipt. As a result, the Marines were able to maintain a mission capable rate for the CH-53E helicopter of 67.5 percent during combat operations, compared with a peacetime rate of 72.3 percent in 2000. Marine Corps officials stated that there were no shortages of rotor blades for UH-1N and CH-53E helicopters, and our analysis of the 3rd Marine Air Wing's demand and the supply system's ability to promptly provide rotor blades during OIF supports their assertion. Even though they were able to get enough rotor blades from the supply system to meet their demands, Marines took a number of actions during OIF to extend the life of rotor blades in theater. Marines improved the durability of CH-53E rotor blades and other bladed helicopter parts by coating them with titanium paint and a tape covering to protect the leading edge of the blade from sand erosion. In addition, as the pace of combat operations slowed, Marines built permanent airfields with paved landing areas, which decreased blade erosion during takeoff and landing. The Marine Corps and the Naval Inventory Control Point Philadelphia attribute their ability to provide rotor blades to the use of models that determine the numbers and timing of the spare parts and upgrades needed to sustain helicopter operations. They maintain a 5-year-old system, the Common Rate Calculation System/Common Application Development System, which uses 4 years of historical demand data for the entire aircraft community for particular helicopters, engineering data, and worldwide environmental factors to produce more accurate demand projections. The standard military ration for the individual combatant is a prepackaged, self-contained ration known as an MRE (see fig. 19). An MRE provides 1,300 calories per bag and is designed to sustain an individual engaged in heavy activity, such as military training or actual military operations, when normal food service facilities are not available.
MREs are issued in cases of 12 and have a shelf life of 3 years when stored at 80°F. DLA’s Defense Supply Center Philadelphia manages the MRE supply for all services. It has supplied a total of 5.1 million MRE cases for OIF. MREs were not available to the warfighter at some point between October 2002 and September 2004. We consider this item to have a shortage because demand exceeded the amount available to meet the needs of the warfighters. As figure 20 indicates, as the demand for MREs in OIF grew between December 2002 and March 2003, the worldwide inventory declined. A shortage of MREs began in February 2003 and continued into March 2003, when monthly demand peaked at 1,810,800 cases, although only 500,000 cases were available in the inventory. Figure 20 also shows that the production output of MREs increased from December 2002 through April 2003. As a result of DLA’s actions to maintain an industrial base capable of a large surge in production, the industrial base was able to increase its monthly production of MREs. Consequently, DLA never reported any back orders for MREs during OIF. In late April 2003, as U.S. forces transitioned from MREs to other food consumption options, monthly demand decreased significantly to 650,000 MRE cases. That month, the industrial base produced 1.3 million cases. From May 2003 on, a sufficient quantity of MREs was available in inventory to meet demand. Army and Marine Corps units did not always have all the MREs they needed. According to CENTCOM after-action reports, Army combat units were supposed to arrive in theater with a 7- to 10-day supply of MREs. However, CENTCOM reported that many units did not arrive with this quantity, thereby placing a strain on the in-theater inventory. An analysis of Army logistics reports by the RAND Corporation indicated that some units came within 2 days or less of exhausting on-hand quantities. According to the 2nd Force Service Support Group, Marine Corps combat units averaged a 6- to 8-day supply throughout the war, but there were times when some forces had less than a 1-day on-hand supply. Marine Corps Combat Service Support Companies, which directly support combat units, also reported critical shortages of MREs. According to the 1st Force Service Support Group, direct support units were supposed to maintain a 2-day supply. However, according to a study by the Center for Naval Analysis, there were times in late March and mid-April 2003 when direct support units had less than a 1-day supply. Problems with requirements planning, the release of operations and maintenance funding, and distribution contributed directly to shortages of MREs in theater. DLA’s forecasted requirements did not support MRE customer demand for the first month of combat operations because of rapid changes in the size of troop deployments. DLA’s forecasted requirement for March 2003 was 996,556 cases of MREs; this number fell short of meeting the customer demand of 1,810,800 cases. The March 2003 forecasted requirement did not include data anticipating that initial in-theater personnel levels would double because of a faster deployment of certain units. In a lessons-learned report, CENTCOM stated that the forecasted MRE requirement for the period of deployment was predicated on a 30-day supply for 50,000 personnel. This forecast was quickly exceeded by the deployment of 100,000 personnel during that 30-day period. The resulting demand placed a strain on existing in-theater MRE inventories.
However, DLA’s model provided accurate planning estimates for MRE customer demand for all other months. The Army experienced a delay in the release of operations and maintenance funding for MREs, despite DOD requirements that supply chain processes provide timely support during crises. Although the Army wanted to submit MRE requisitions to DLA in September 2002, it could not do so because it lacked the operations and maintenance funding necessary to purchase them. When the Army submitted the requisitions in December 2002, DLA shipped MREs to Kuwait. However, this 4-month delay in funding contributed to the MRE shortage by delaying shipments into the theater. The Marine Corps faced a similar funding problem that delayed the processing of ration requests for OIF. As reported in a Marine Corps lessons-learned report, a January 6, 2003, request for a withdrawal of rations from the war reserve was delayed due to a lack of available operations and maintenance funding from Headquarters Marine Corps. The Marine Corps provided notification of partial funding, and its first request for rations was passed to DLA on January 16, 2003. Funding later became available for the remainder of the requirement, and funded requisitions were passed to DLA on February 10, 2003, 5 days before the Marines’ required delivery date of February 15, 2003. A number of distribution problems in the logistics supply chain hampered MRE availability. One problem was that actual MRE delivery times exceeded the forecasted delivery times. Most MREs were transported by ship from the U.S. to a seaport of debarkation in theater and then by ground transportation to combat units. CENTCOM officials estimated it would take 30 to 45 days to transport MREs from the United States to a warehouse in theater. However, they stated that the actual total time to move these rations averaged 49 days: 31 days for transit to the theater, 3 days to gain a berth at port, 5 days to discharge supplies, and 10 days for movement from the port to the theater warehouse. Officials also noted that there were times when it took as long as 60 days to transport MREs from the United States to Kuwaiti ports because multiple, rather than single, vessels were used in the transport process—a factor that initial delivery time estimates did not take into account. The lack of sufficient materiel handling equipment and transportation assets in theater leading up to and during combat operations caused delays in unloading supplies from ships and transporting them to combat units. Because of the lack of adequate handling equipment, logistics personnel could not efficiently unload the large shipments of MREs arriving at ports in Kuwait, resulting in a backlog of ships waiting to be unloaded. DLA officials stated that, at one point, 1.4 million MREs were sitting at a port in theater, waiting to be processed. In addition, there were insufficient transportation assets to move MREs from ports to theater distribution warehouses. In particular, local contractors responsible for delivering rations did not have sufficient trucks to make regular deliveries to theater distribution warehouses. Materiel handling equipment and transportation assets were also insufficient to move MREs from storage locations to combat units. For example, according to one OIF after-action report, there were times when 80 trucks were needed to move rations forward but only 50 were available.
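The planning-factor arithmetic behind these ration forecasts can be illustrated with a short sketch. Only the 12-per-case packing and the 30-day, 50,000-person planning basis come from the discussion above; the consumption rate of three MREs per person per day and the function name are assumptions made for illustration.

```python
# Illustrative sketch of ration planning-factor arithmetic. The packing
# factor (12 MREs per case) and the 30-day/50,000-person planning basis come
# from the report; the consumption rate is an assumed figure for illustration.
MEALS_PER_PERSON_PER_DAY = 3   # assumption, not a figure from the report
MEALS_PER_CASE = 12            # MREs are issued in cases of 12

def cases_required(personnel: int, days: int) -> int:
    """Whole cases of MREs needed to feed `personnel` troops for `days` days."""
    meals = personnel * MEALS_PER_PERSON_PER_DAY * days
    return -(-meals // MEALS_PER_CASE)  # ceiling division to whole cases

planned = cases_required(50_000, 30)   # CENTCOM's planning basis
actual = cases_required(100_000, 30)   # the deployment that actually occurred
print(planned, actual)                 # 375000 750000
```

Because required cases scale linearly with troop strength, doubling the deployed population from 50,000 to 100,000 doubles the requirement, roughly consistent with the gap between DLA’s March 2003 forecast of 996,556 cases and the actual demand of 1,810,800 cases.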
Poor in-transit visibility also delayed distribution of MRE shipments in several ways. CENTCOM officials stated that logistics personnel could not always rely on radio frequency identification device technology to account for shipments. Despite a CENTCOM requirement that radio frequency identification device tags be used for all shipments to theater, CENTCOM estimated initial use was only about 30 percent. Among other problems were the failure to attach tags to all containers and a lack of sufficient devices to read the tags and identify subsistence items stored in containers. As a result, logistics personnel stated they had to manually review all packing documents to identify the contents of containers, thereby slowing down the distribution of supplies. Because of poor tracking, sufficient supplies of MREs sometimes existed but were not visible. For example, during the MRE shortage, a DOD official found more than 17 twenty-foot containers of MREs at a supply base located halfway to Baghdad; the MREs had sat there for a week because no one knew they were there. To reduce MRE consumption during the shortage, Army and Marine Corps officials stated that units switched to alternate feeding methods such as Unitized Group Rations. CENTCOM reported working with various carriers and the Military Surface Deployment and Distribution Command to use sustainment packages weeks ahead of their scheduled issue dates. To improve the distribution of MREs, military officials formed a joint working group including members from DLA, the Coalition Forces Land Component Command, CENTCOM, and U.S. Transportation Command. This group communicates regularly to improve in-transit visibility over rations. CENTCOM officials stated that, due to the late arrival of ships in theater, DLA located additional rations in other theaters and shipped them to OIF. To ensure timely visibility of anticipated requirements, DLA has recommended enhanced collaboration among itself, the combatant commands, and the services. To improve the timeliness of funding, DLA is working with the services to refine their plans for releasing funding early in the deployment process. To deal with distribution problems in theater, the Secretary of Defense in September 2003 designated the U.S. Transportation Command as the Distribution Process Owner. The Transportation Command established a Deployment and Distribution Operations Center in January 2004. The center is responsible for improving the distribution process within theater by directing airport, seaport, and land transportation operations. The U.S. Army depends on a variety of trucks and other vehicles to support combat operations. During OIF, it relied on the 5-ton capacity cargo truck to transport all types of supplies and on the HMMWV to carry troops and armaments, as well as to serve as an ambulance and scout vehicle. The 5-ton truck (fig. 21) is outfitted with six radial tires and the HMMWV with four radial tires. The tires are specific to each type of vehicle and are not interchangeable. The Army’s TACOM Tire Group manages the tire inventory for wheeled vehicles, including the 5-ton truck and the HMMWV, for U.S. forces worldwide. These tires are produced for the military by several manufacturers, including Goodyear and Michelin. Tires for the 5-ton truck and the HMMWV were not available to the warfighter at some time between October 2002 and September 2004.
We consider this item to have a shortage because demand exceeded the amount of inventory available to meet the needs of the warfighters. U.S. forces and logistics personnel reported critical shortages of 5-ton truck and HMMWV tires during OIF that negatively impacted their mission. According to TACOM officials, the increased pace of the operations resulted in high vehicle mileage that caused significant wear and tear on these tires. Prior to the onset of OIF in March 2003, TACOM had no back orders for 5-ton truck tires and reported it was able to support demands from customers worldwide. However, as figure 22 shows, back orders started to accumulate after OIF began and, by October 2003, had peaked at 7,063 tires. Similarly, worldwide demand for tires rose after March 2003. As figure 22 indicates, this demand increased fourfold over the course of 1 year, climbing from a peacetime level of 1,189 tires in April 2002 to a wartime level of 4,800 tires in April 2003. While demand remained high during the summer of 2003, inventory levels dropped below 1,000 and were insufficient to meet customer needs. For example, in August 2003, when demand reached 4,828 tires, TACOM recorded only 505 tires in its inventory worldwide. According to TACOM officials, demands from OIF received priority, and much of the available inventory supported operations in Iraq. TACOM reported no back orders for HMMWV tires prior to OIF. However, as figure 23 shows, back orders began to increase in April 2003 and peaked at 13,778 tires in September 2003 as demand increased and industry took several months to respond. According to TACOM officials, back orders accumulated because of the increasing demands coming from OIF. Worldwide demand rose rapidly in June 2003, peaked at 16,977 tires in August 2003, and gradually declined during the winter months (see fig. 23). Over the span of 1 year, worldwide demand increased more than fourfold, from a peacetime rate of 3,251 tires per month in June 2002 to 15,224 tires per month in June 2003. While demand grew during the summer of 2003, inventory levels were insufficient to meet customer needs. For example, in July 2003, TACOM recorded only 4,286 HMMWV tires in its inventory, but had demands for a total of 14,435 tires. Fluctuating demands were caused by the intensity of the war fight and the changing mixture of weapons systems employed. Army and Marine Corps units reported that tire shortages negatively affected operations in Iraq. Units of the 3rd Infantry Division reported that they could not get the required number of tires to support their mission and that the shortage of tires forced them to leave vehicles and supplies behind. In addition, TACOM reported in June 2003 that it could provide only 64 percent of the spare parts, including tires, that the 4th Infantry Division considered urgent. Although the 4th Infantry Division reported shortages in theater, it did not report any mission impact due to tire shortages. In an after-action report, the U.S. Marine Corps documented that troops cannibalized, stripped, and abandoned otherwise good vehicles because of the lack of spare tires. Problems with war reserve stocks, forecasted requirements planning, funding, and distribution contributed to shortages of the 5-ton and HMMWV radial tires during OIF. The number of tires in war reserve stocks was not sufficient to support customer demands when OIF began.
According to Army regulations, war reserve stocks are intended to meet the initial increase in demand during wartime and to fill the gap until the national supply system can increase production. In December 2002, TACOM officials managing war reserves established a requirement for 259 tires for 5-ton trucks. However, officials had only 38 tires on hand at that time, and 3 months later, in March 2003, they had only 16 tires on hand. As of October 2004, the war reserve requirement for the 5-ton truck tire remained at 259 tires, but there were only 2 tires in the inventory. As figure 22 shows, the demand for 5-ton truck tires was always higher than 259 tires, starting with 978 tires in February 2002 and continuing throughout OIF. Therefore, the war reserve requirement of 259 tires was too low to support initial demands from units in theater. For HMMWV radial tires, TACOM managers had a sufficient number of tires to meet the war reserve requirement of 1,505 tires in December 2002. In March 2003, managers increased the HMMWV tire war reserve requirement to 7,908 tires, but they failed to adequately stock tires in the inventory. At that time, they had only 1,483 tires on hand. As of October 2004, the war reserve requirement for HMMWV tires remained at 7,908 tires, but there were only 3,764 tires in the inventory. TACOM officials told us that they do not adequately stock tires in the war reserves because they lack the necessary funding. This was the result of risk-based decisions about how to allocate DOD funds. As of October 2004, TACOM’s war reserve requirements for all items it manages (including tires) totaled $1,355.7 million. However, it had received only $828.9 million to support those requirements. As a result, TACOM officials have used a risk management approach to prioritize the funding of their requirements. For example, they gave funding priority to more expensive items, such as tank engines, which have long lead-times and are difficult to procure, rather than to less expensive items, such as tires, which can be produced faster. When OIF began, tires stocked in war reserves were inadequate to support initial customer demands because of these decisions. TACOM’s forecasted requirements for vehicle tires underestimated the actual demand for tires during fiscal year 2003. For example, TACOM forecasted that worldwide requirements for the 5-ton truck tire would reach 1,497 tires per month in April 2003; however, the actual demand for this tire rose to 4,800 for that month, more than three times the forecasted amount. Similarly, TACOM forecasted that customers would need 5,800 HMMWV tires per month in June 2003; instead, actual worldwide demand for HMMWV tires grew to 15,224 per month, nearly three times the forecasted amount. In June 2003, TACOM changed its requirements forecasting model for tires and other spare parts from a 12-month average demand base to a 30-day average demand base to respond to the sharp increase in actual demand. According to TACOM officials, the 12-month average demand base model did not react quickly enough to actual demands, which were at times three or four times higher than the monthly forecasted requirements. By changing the model to a 30-day average demand base, TACOM was able to stock up on inventory faster. In setting forecasted requirements for tires, TACOM officials stated they relied heavily on past historical demand data because they received little guidance on expected demand activities or operational plans from Army headquarters.
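A minimal sketch can illustrate why the change in average demand base mattered. The demand series and function name below are hypothetical; only the peacetime and wartime demand magnitudes echo the 5-ton truck tire figures reported above.

```python
# Hypothetical monthly demand series: 12 months near the reported peacetime
# level (~1,200 tires per month) followed by 3 months near the reported
# wartime level (~4,800 tires per month).
demand = [1_200] * 12 + [4_800] * 3

def average_demand(history: list[int], window: int) -> float:
    """Forecast next month's demand as the mean of the trailing `window` months."""
    recent = history[-window:]
    return sum(recent) / len(recent)

twelve_month_base = average_demand(demand, 12)  # model used before June 2003
one_month_base = average_demand(demand, 1)      # 30-day base adopted in June 2003
print(round(twelve_month_base), round(one_month_base))  # 2100 4800
```

Three months into the surge, the 12-month base still forecasts less than half of actual demand, while the 30-day base has already caught up, which illustrates why the shorter window allowed TACOM to stock inventory faster.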
TACOM expected an increase in demand for fiscal year 2003 because of the growing demand from southwest Asia, especially Kuwait, prior to the onset of OIF. Officials from TACOM’s Tire Group told us they emphasized past historical demand data in forecasting their future requirements. Similarly, TACOM’s Track and Roadwheel Group reported that it relied on historical data, including information from Operation Desert Storm/Shield and operations in Bosnia, to help forecast future requirements in the absence of official guidance. According to TACOM officials, the Tire Group did not receive adequate funding (referred to as obligation authority) from the Department of the Army’s working capital fund to buy additional tires to meet customers’ needs. Furthermore, when obligation authority became available, they did not receive it promptly. In fiscal year 2003, TACOM had worldwide demands for tires totaling $246.3 million; however, it received only $212 million in obligation authority, about 86 percent of its total requirements. By comparison, during the same fiscal year, TACOM received about $118.5 million worth of requisitions for all tires needed in OIF. As TACOM exhausted its obligation authority during fiscal year 2003, additional releases came in sporadically. For example, in July 2003, TACOM reported that it had used all of its obligation authority but still had $22 million worth of contracts that needed funding; by August 2003, however, TACOM reported that it had funds available to continue awarding contracts. TACOM’s Tire Group complained that the “stop-start” funding releases complicated its efforts to maintain a consistent supply of tires because they prevented the group from providing manufacturers a steady stream of funds in advance of the production lead-time. The Tire Group also did not know when the next release of obligation authority would come or how large it would be. In order to ensure that the industrial base could provide supplies promptly, TACOM needed funding at least one procurement lead-time (i.e., the time it takes a manufacturer to make and deliver the tire) in advance of the delivery date. For most tires, the procurement lead-time is 3 to 6 months. Therefore, in order to meet unexpected surges in demand, TACOM needed to have funding available 3 to 6 months prior to the surge. In addition to the Tire Group, TACOM as a whole was underfunded in fiscal year 2003. Figure 24 shows that throughout fiscal year 2003, TACOM was funded below its actual requirements. At the beginning of fiscal year 2003, TACOM identified its requirements at $1,357 million; however, it was provided with only $885 million in obligation authority. By May 2003, TACOM came close to using all of its obligation authority without any assurance that additional funding would arrive. As a result, TACOM officials asked their support groups to conserve funding for the most critical items until additional funding arrived. However, in June 2003 TACOM received additional funding, which allowed item managers to resume awarding contracts for supplies. For fiscal year 2003, TACOM identified its actual requirements at $2,726 million (including $345 million for reset), but it received only $2,379 million in obligation authority. Distribution constraints, both in the continental U.S. and in OIF, contributed to customers not receiving supplies. The distribution system was not prepared to handle the volume of supplies ordered by customers or the speed with which supplies needed to be delivered.
In the summer of 2003, the Defense Distribution Center Susquehanna, Pennsylvania, became overwhelmed by the volume of incoming shipments from contractors delivering supplies for units in Iraq. Because of the increased volume, the center gave contractors delivery appointment times that were 2 to 3 weeks in the future, thereby delaying the delivery and processing of many items, including tires. Once tires were in the distribution center’s warehouse, the requirement to build pallets to ship them to the theater caused further delays. Officials told us that the backlog of pallet building resulted in delays of 30 days or more before tire shipments could be released from the center. To alleviate this backlog, all tires shipped in and after June 2003 were diverted to the Defense Depot Red River, Texas, to be palletized and shipped directly to aerial ports of embarkation at Charleston and Dover Air Force Bases. Once tires were shipped from the U.S., TACOM lost all visibility of tire shipments within CENTCOM’s area of responsibility. At the Port of Kuwait, containers could not be identified because radio frequency identification tags that should have been on the pallets were lost during shipment, thus increasing processing time. In addition, once these shipments left the port, receipts were not posted at the customer supply support center to verify delivery. Officials also stated that because of the lack of in-transit visibility, shipments were frequently diverted to other destinations without TACOM’s knowledge or authorization. TACOM initiated several temporary actions and one long-term action to improve the availability of tires to customers in the field. However, TACOM officials did not identify efforts to address the funding problems experienced during OIF, and they told us that they are not aware of any initiatives at AMC headquarters or the Department of the Army that address funding issues. To ensure that forecasted requirements better reflected actual demands, in June 2003, TACOM’s Tire Group changed the average demand base it used to calculate requirements from 12 months to 30 days. By making this change, the Tire Group captured demand data in real time and allowed item managers to better estimate future requirements. As a result, item managers were able to justify procuring more tires to meet future demands. To ensure continuous production while awaiting additional obligation authority, officials from TACOM’s Tire Group said they persuaded manufacturers to continue making tires. Tire manufacturers continued making tires while waiting for contracts and made capital investments to procure more tire molds, enabling them to increase production once contracts were awarded and obligation authority became available. To ensure quicker distribution of tires to customers in theater, TACOM sent a group of supply personnel to Camp Arifjan in Kuwait to expedite the processing of TACOM’s shipments of tires and other spare parts. In response to complaints that TACOM’s tire and spare parts shipments were being diverted and not reaching the right customers, TACOM’s supply personnel also helped locate these shipments and get them delivered. To help solve the long-term distribution problems in theater, in September 2003 the Secretary of Defense designated the U.S. Transportation Command (TRANSCOM) as DOD’s Distribution Process Owner. TRANSCOM established a Deployment and Distribution Operations Center in January 2004.
Under the control of CENTCOM, this center is responsible for improving the distribution process within theater by directing all airport, seaport, and land transportation operations. The HMMWV is a highly mobile, diesel-powered, four-wheel-drive vehicle with a 4,400-pound payload. Using common components and kits, the HMMWV can be configured to become a troop carrier, armament carrier, shelter carrier, ambulance, anti-tank missile carrier, or scout vehicle. The initial number and type of HMMWVs in each unit are based on standard equipment lists. According to officials, HMMWVs are the most numerous U.S. military vehicles in CENTCOM’s area of responsibility. The Army reported that there were 18,656 vehicles—both armored and unarmored—in theater, as of July 2004. One version of the HMMWV is a production model known as an Up-Armored HMMWV, also designated as the M1114 model (see fig. 25). This model is produced by AM General Corporation and armored by O’Gara-Hess Eisenhardt. Requirements for CENTCOM’s area of operations, including Iraq and Afghanistan, call for this up-armored variant. The M1114 model of the vehicle features ballistic-resistant windows and steel-plate armor on the doors and underside to protect against rifle rounds and explosive blasts, fragmentation protection, and additional armor for the turret gunner on the roof to protect against artillery, as well as a powerful air conditioning system. In order to provide armor protection to existing unarmored HMMWVs in theater, the Army has developed an add-on-armor kit to be mounted on vehicles. The basic kit includes armored doors, under-door armor plates, seat-back armor, ballistic glass windows, and a heavy-duty air conditioning system. Seven Army depots and arsenals, managed by the Ground Systems Industrial Enterprise, currently produce the kits. The Army began shipping the kits to Iraq by mid-November 2003 and started mass production at its depots in December 2003. The Army also contracted with O’Gara-Hess Eisenhardt to produce additional armor kits to meet theater requirements. Up-armored HMMWVs and add-on-armor kits were not available to the warfighter at some time between October 2002 and September 2004. We consider this item to have a shortage because vehicles and kits were not available to meet the validated requirements developed by the warfighters. The Army has been consistently unable to meet recurring spikes in demand for vehicles and kits. However, the overall impact of the Army’s inability to deliver the vehicles and kits is difficult to measure. Since the Coalition Forces Land Component Command first began identifying up-armored HMMWV requirements for CENTCOM’s area of responsibility in the summer of 2003, there has been a gap between the number of vehicles required and the number of vehicles the industrial base is producing. By September 2004, TACOM and the Army had provided 5,330 of the 8,105 required vehicles in theater. To meet Coalition Forces Land Component Command’s requirements, Army program managers worked with O’Gara-Hess Eisenhardt to produce an additional 2,533 new up-armored HMMWVs, and the Department of the Army redistributed an additional 2,797 existing vehicles to Iraq from elsewhere in the world. Figure 27 shows that Coalition Forces Land Component Command requirements for vehicles increased faster than O’Gara-Hess Eisenhardt was producing them, with requirements growing from 1,407 vehicles in August 2003 to 8,105 vehicles by September 2004.
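The relationship between the remaining vehicle gap and the production rate can be sketched as follows. The requirement and delivery figures come from the discussion above; the 550-vehicle-per-month rate is the maximum contractor capacity cited elsewhere in this appendix, and the function name is illustrative.

```python
import math

def months_to_close_gap(required: int, provided: int, monthly_rate: int) -> int:
    """Whole months of production needed to close the remaining vehicle gap."""
    gap = required - provided
    return math.ceil(gap / monthly_rate)

# As of September 2004: 8,105 vehicles required, 5,330 provided (figures from
# the discussion above); 550 per month is the reported maximum capacity.
print(months_to_close_gap(8_105, 5_330, 550))  # 6
```

At that rate, the roughly 2,775-vehicle gap closes in about six months from September 2004, consistent with the expectation, noted below, that current requirements would be met by March 2005.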
The Army worked with the manufacturers to increase production from 51 vehicles per month in August 2003 to 400 vehicles per month in September 2004. According to Army officials, O’Gara-Hess Eisenhardt will increase production to its maximum capacity of 550 vehicles per month and will meet current requirements by March 2005. As of September 2004, the Army had supplied 8,771 of the 13,872 add-on-armor kits required by CENTCOM but still needed 5,101 additional kits to meet all requirements. The Ground Systems Industrial Enterprise depots and arsenals were required to produce 12,372 kits, while O’Gara-Hess Eisenhardt was required to produce the remaining 1,500 kits. As shown in figure 28, by September 2004 the validated requirement of 8,400 kits had grown to 13,872. To meet the 8,400-kit requirement, program managers worked with several Army depots to increase production from 35 kits a month in December 2003 to 600 kits per month by July 2004. At this production level, theater requirements would have been met by August 2004. However, during that same month, Coalition Forces Land Component Command increased the requirement to 13,872 kits. Army officials stated that it would take 3 to 4 months to meet this new demand and accordingly expected the requirement to be met by early 2005. There are two primary causes for the shortages of up-armored vehicles and add-on-armor kits. First, a decision was made to pace production rather than use the maximum available capacity. Second, funding allocations did not keep up with rapidly increasing requirements. DOD paced the production of armor for HMMWVs to meet initial CENTCOM requirements, but did not use the maximum available production capacity as the requirements increased dramatically after the onset of OIF. According to Army officials, the total Army up-armored HMMWV requirement prior to OIF was approximately 360 vehicles per year, to be produced at a rate of 30 vehicles per month. However, beginning in August 2003, Coalition Forces Land Component Command developed new requirements for additional up-armored HMMWVs based on requests from units in theater; the requirement increased from 1,407 vehicles to 8,105 vehicles by September 2004, nearly a sixfold increase. There was also a significant increase in the requirement for kits. In November 2003, the initial requirement for add-on-armor kits was 8,400 kits. By September 2004, the requirement had increased to 13,872 kits. O’Gara-Hess Eisenhardt, the sole producer of the up-armored HMMWV, increased production in accordance with agreements with the Army; however, that rate of production has not been sufficient to meet increasing demands. The schedule of monthly production increases agreed to by the Army and O’Gara-Hess Eisenhardt was based on meeting existing requirements established at a particular time as well as funding constraints. For example, the Army had requirements of 4,149 vehicles in February 2004 to meet CENTCOM’s needs. In meeting this requirement, the Army redistributed over 3,000 existing up-armored HMMWVs to CENTCOM’s area of responsibility and agreed to have O’Gara-Hess Eisenhardt produce the rest of the vehicles.
The Army had planned to meet the February 2004 requirement by July 2004 without having O’Gara-Hess Eisenhardt reach its maximum capacity. As shown in figure 27, the vehicle production rate increased every month, from 51 vehicles in August 2003 to 400 vehicles by September 2004, with a planned production of 460 vehicles per month by October 2004. However, the signed agreement with O’Gara-Hess Eisenhardt indicates that the maximum production could have been increased to 500 vehicles per month in October 2004 if needed. Interviews with Army and contractor personnel indicated that there were other constraints on production, such as the availability of communication equipment. Despite increasing requirements for the add-on-armor kits, additional available production capacity was not used. Prior to CENTCOM’s requirement for 8,400 kits in November 2003, the Army had already begun designing and shipping some “pilot” kits to theater. When it received the requirements in 2003 for 8,400 kits, the Ground Systems Industrial Enterprise’s depots and arsenals began ordering raw materiel such as steel and ballistic glass and ramped up production from 35 kits per month in December 2003 to 3,998 kits per month in April 2004. As total production neared the 2003 requirement, production was slowed to 333 kits per month by September 2004. Because the kits take 3 to 4 months to produce, it was not until January 2004 that the depots and arsenals began shipping substantial quantities to theater. Our review of Army data and interviews with Army officials show that additional capacity to produce kits was available within the Ground Systems Industrial Enterprise system. Managers at Ground Systems Industrial Enterprise indicated that the seven arsenals and depots could have maintained the maximum level of production without affecting other operations at the depots, filling the kit requirement early in 2004. In addition, in February 2004, a contractor-operated Army facility informed the Ground Systems Industrial Enterprise managers that it could produce another 800 4-door kits per month. While the managers stated that they did not use the contractor-operated facility due to issues with contract timing and price, they did not have information on the decision to slow the pace of production. DOD decision makers determined the pace at which both up-armored HMMWVs and kits would be produced, but did not inform Congress about the total available production capacity. We have not been able to determine what criteria were used to set the pace of production; however, in both cases, additional production capacity was available, particularly for the kits. Because decisions about the rate of production were neither visible to nor accepted by outside parties, DOD received criticism about the availability of armored vehicles in Iraq. While funds were available to support the planned pace of production of up-armored HMMWVs, program managers were not aware of the time frame for releasing funds. Although TACOM received over $1.4 billion between fiscal years 2003 and 2004 to produce 7,502 vehicles, the funding was not released in a timely and predictable manner. Figure 29 shows that in August 2003, the managers received requirements for 1,407 vehicles. However, they had received funding to produce only 648 vehicles. By October 2003, program managers had a requirement to produce 3,279 vehicles, but received funding to produce only 1,456 vehicles.
Significant differences continued until April 2004, when requirements reached 4,454 vehicles and the program managers received funding to produce 4,320 vehicles. The timing of fund disbursements affected program managers’ ability to plan and contract with O’Gara-Hess Eisenhardt to produce sufficient quantities of up-armored HMMWVs. As shown in figure 29, requirements increased in June 2004 to 6,223 vehicles and again in August to 8,105 vehicles. However, the additional funding to meet these demands—$572 million—was not received until August 25, 2004. As a result, Army officials stated, the Army could not ask O’Gara-Hess Eisenhardt to ramp up to its maximum capacity of 550 vehicles per month because it did not have the funding at the time requirements increased. Furthermore, program managers explained that if O’Gara-Hess Eisenhardt is to produce vehicles efficiently at a consistent and high rate, the company must be assured of consistent funding at least 3 months in advance of delivery. The program officials stated that they did not know when funding would come, how many disbursements they would receive in a given fiscal year, or what amount of funding to expect, further complicating their procurement planning. The major short-term solution to the up-armored HMMWV funding issue has been the receipt of additional funding from congressional increases, supplemental funding, and Office of the Secretary of Defense additions. For fiscal years 2003 and 2004, the Army received over $1.4 billion to produce 7,502 up-armored HMMWVs to meet worldwide requirements, including the 8,105 vehicles required for CENTCOM’s area of operations, mostly from congressional increases and supplementals. Specifically, in fiscal year 2004, the Army received $1.19 billion in congressional plus-ups, supplementals, and Office of the Secretary of Defense additions above the $51.7 million received in the President’s Budget to produce more up-armored HMMWVs. To meet continuing needs for force protection, Congress recommended $865 million in the 2005 appropriations bill to be used by the Army to armor additional HMMWVs and other vehicles. As part of the Rapid Response Force Protection Initiative, Congress intends the funds to be used to purchase and modify a variety of vehicles currently used in theater to respond rapidly to the threat of improvised explosive devices and mortar attacks experienced by deployed U.S. forces. To improve the industrial capability, the Army worked with O’Gara-Hess Eisenhardt as well as Army depots to increase production of vehicles and kits. For example, program managers worked with O’Gara-Hess Eisenhardt to increase up-armored HMMWV production from an average of 30 vehicles a month to 400 vehicles a month by September 2004. The company plans to increase production to a maximum of 550 vehicles a month to meet current requirements by March 2005. The Army also ran 24-hour assembly lines at its depots and produced over 1,000 add-on-armor kits per week between March and April 2004, when materials were available to make the kits. At the time of this review, Army officials had not identified any long-term efforts to improve the availability of up-armored HMMWVs or add-on-armor kits. We have modified our recommendation to require that DOD specify when these actions will be completed. 12. DOD stated that it is taking a number of actions to develop a holistic information technology plan to improve distribution.
According to DOD, this plan, which is expected to be completed in May 2006, will recommend systems integration solutions to synchronize end-to-end distribution. We remain concerned that until this plan is completed and its recommendations fully implemented, DOD and the services will not be able to achieve their goal of distributing the right supplies to the right places when warfighters need them. Therefore, we have also modified our recommendation to require DOD to specify when the plan’s recommendations will be implemented. In addition to those named above, Nancy L. Benco, John C. Bumgarner, Michele C. Fejfar, Emily Gupta, Shvetal Khanna, Harry A. Knobler, Kenneth R. Knouse, Jr., Elizabeth D. Morris, Tinh T. Nguyen, Kenneth E. Patton, Terry L. Richardson, Cary B. Russell, Rebecca Shea, Christopher W. Turner, John W. Vanschaik, Jason G. Venner, Gerald L. Winterlin, and John C. Wren made key contributions to this report.
GAO has identified spare parts supply as a long-standing Department of Defense (DOD) management problem. In December 2003, GAO reported on problems with Operation Iraqi Freedom (OIF) logistics support, including shortages of spare parts and supplies in Iraq. This report expands on that effort by assessing (1) what supply shortages were experienced by U.S. forces in Iraq between October 2002 and September 2004 and what impact the shortages had on their operations, (2) what primary deficiencies in the supply system contributed to any identified supply shortages, and (3) what actions DOD has taken to improve the timely availability of supplies for current and future operations. To address these objectives, GAO judgmentally selected nine items, based on lessons-learned and after-action reports, that represented possible shortages with operational impacts. U.S. troops experienced shortages of seven of the nine items GAO reviewed. According to the 2004 National Military Strategy, U.S. forces expect to have sufficient quantities of the right items at the right time. However, demand for the seven items exceeded availability sometime between October 2002 and September 2004. The documented impact of these shortages varied between combat units. For example, while units in the 3rd Infantry Division reported that tire shortages reduced their operational capability, forcing them to abandon equipment, the 4th Infantry Division reported no similar effect. GAO identified five systemic deficiencies that contributed to shortages of the reviewed items, including inaccurate Army war reserve spare parts requirements and ineffective distribution. Annual updates of Army war reserve parts requirements have not been conducted since 1999. As a result, the war reserves did not contain enough track shoes, batteries, and tires to support U.S. forces during initial operations. Effective distribution relies on a seamless process to promptly move supplies from the United States to a customer. GAO found that conflicting doctrinal responsibilities for distribution management, improperly packed shipments, insufficient transportation personnel and equipment, and inadequate information systems prevented the timely availability of four of the items. While U.S. troops developed short-term solutions to manage item shortages during OIF, DOD and the services have begun to undertake systemic, long-term changes to fix some of the supply problems identified. While GAO did not evaluate their potential for success, the majority of the changes are focused on distribution and not on the full range of systemic deficiencies GAO identified.
Between 1999 and 2001, the OECD member nations negotiated an environmental policy framework for their ECAs. (See fig. 1.) With the exception of the United States, OECD members agreed in November 2001 to voluntarily implement a version of this framework, known as the Common Approaches. This process of negotiation followed a 1997 communiqué from the Group of Eight (G-8) that indicated a strong interest in negotiating common environmental guidelines for ECAs. At the 1999 G-8 summit, the heads of state issued a second communiqué stating that they hoped for agreement within the OECD by their 2001 summit, a deadline the G-8 reiterated the following year. The United States led the effort to regularly place common ECA environmental procedures on the G-8 agenda. The United States sought to promote uniform standards because it was concerned about unequal export market conditions and about growing concern among nongovernmental organizations (NGOs) that ECA-funded projects had the potential to cause significant environmental harm. However, the most current version of the Common Approaches required neither public disclosure of project information nor establishment of a single set of technical standards. Because the United States believed that these provisions were essential, it objected and said it would block the agreement if it were brought before the OECD Council for approval. Twenty-eight of 29 OECD members of the Working Party on Export Credits and Credit Guarantees (ECG) subsequently opted to voluntarily adhere to the Common Approaches, thus implementing the framework without a formal decision. Application of the Common Approaches began on January 1, 2002. (See app. II for a list of OECD participants and ECG members that are implementing the Common Approaches.) Some ECAs have used environmental guidelines from the World Bank Group as a model for the development of environmental procedures and standards to use in evaluating projects. The environmental guidelines of Ex-Im Bank, for example, were developed using World Bank standards as a reference point. While the environmental review policies of different organizations within the World Bank Group vary somewhat, they generally follow a similar screening and categorization process (see fig. 2). The World Bank’s environmental policies include evaluation and technical standards, such as those for emissions, which are laid out in the Pollution Prevention and Abatement Handbook. It also has a set of qualitative environmental and social standards, known as safeguard policies. These outline broad project and evaluation expectations, including guidelines on involuntary movement of peoples, impacts on cultural property, and conservation of natural habitats. Representatives of both the business community and the environmental NGO community have expressed considerable interest in the issue of environmental review procedures for ECAs, but for different reasons. Some businesses in the United States are concerned that if Ex-Im Bank maintains environmental standards more stringent than the standards of some competitor ECAs, the additional project review and mitigation costs may hurt U.S. exports. Businesses in other OECD countries are also concerned that disclosure requirements, which the United States proposed in OECD negotiations, will make sensitive business information public.
In contrast, environmental NGOs in numerous countries have become interested in ECA environmental review standards as a result of actual or proposed ECA funding of large, potentially environmentally harmful projects such as the Three Gorges Dam in China, the Chad-Cameroon oil pipeline, and the Camisea gas field development project in Peru. (See app. III for descriptions of selected ECA projects.) NGO representatives stated that ECAs, as public entities, should provide members of the public with the opportunity to provide input on projects that their governments are supporting. A number of these NGOs have joined in a campaign to change ECA practices to include strong review standards and open disclosure policies. While most OECD members have adopted a common environmental policy framework through the Common Approaches, some notable differences remain in their ECA environmental review procedures and policies. Because the Common Approaches is only a framework, it allows important differences in members’ national policies in certain key areas, such as the application of technical standards and the disclosure of project-specific information. Ex-Im Bank guidelines provide much more specific detail than the Common Approaches framework (see table 1). Ex-Im Bank guidelines clearly describe which types of applications must undergo environmental review. For those transactions requiring review, Ex-Im Bank guidelines contain nine detailed sector tables delineating specific environmental requirements. Ex-Im Bank guidelines also provide for a public disclosure period prior to a final decision by its Board of Directors. During this period, it lists the name and location of projects that will be subject to an environmental review and makes certain environmental information available upon request. In contrast, the Common Approaches is a framework that allows for variations in project review specifics. The Common Approaches framework differs from Ex-Im Bank guidelines in that it does not specify the use of a single set of technical standards, does not establish a set review procedure, and does not require public disclosure of project information prior to final funding decisions. Adherents to the Common Approaches are, however, expected to assess projects using specific standards selected by the ECA involved, categorize projects according to environmental risk level, and annually report to the OECD information on environmentally sensitive projects. When the OECD members developed the Common Approaches, they effectively established a framework within which to create or update their own ECA environmental review policies. Most OECD members have taken action to implement the Common Approaches’ provisions. Some members, including Belgium, Greece, Hungary, and Portugal, did not previously have any environmental review practices but since adopting the Common Approaches have taken steps to create them. These steps include hiring staff to review projects for environmental concerns, or training current staff to do so, and implementing procedures for reviewing potential projects. Many of the members that had environmental policies in place before adopting the Common Approaches have revised those policies since January 2002 to adhere to the Common Approaches. For example, as shown in appendix IV, ECAs in the six countries we visited have made revisions based on the provisions of the Common Approaches. Other OECD members have made similar revisions.
For example, Norway introduced an environmental review policy in 1998 and reviewed it in 2003 to ensure that it conformed to the Common Approaches. Most OECD members are now following similar basic procedures for reviewing sensitive projects. For example, most of the countries we visited require applicants for financing to complete a questionnaire regarding the potential environmental impacts of their proposed project, which the ECA uses to categorize the project. Projects likely to have significant adverse environmental impacts are placed in category A, while projects with questionable environmental impacts are classified as category B. Most ECAs place projects with little or no potential environmental impact in category C. Most ECAs require applicants to complete an environmental impact assessment if their projects are placed in category A (high risk of environmental impact). ECA projects may be approved despite adverse environmental impacts. Each ECA we visited relies on the judgment of its experts to evaluate the overall environmental impact of projects. Moreover, several ECAs, including Ex-Im Bank, allow their Boards to approve projects notwithstanding the results of the environmental review. For example, Canada’s ECA cites circumstances under which it could approve projects with adverse impacts, including when it believes that the project represents an opportunity to improve environmental conditions in the host country or to transfer environmentally sound technology and services. Despite the commonalities among OECD members’ environmental impact review systems, differences exist in how ECAs review potential projects and report on projects they undertake. These differences involve the application of technical standards and the disclosure of certain information. OECD members vary in terms of the technical standards they use to assess environmental impacts. We found that it is common for members to use World Bank technical standards for their reviews. For example, most countries review projects for compliance with World Bank technical standards regarding air quality, greenhouse gas emissions, water consumption, and waste management (see table 2 for selected examples of air emissions and water quality not-to-exceed standards). While most members use World Bank technical standards, many also look to standards that other organizations have established. The World Bank has not updated its technical standards since 1998, and some officials stated that they prefer standards that are more up to date. For example, several ECA officials said they use World Bank standards in the majority of cases, but in some situations, standards set by the World Health Organization or the European Union are more appropriate or current. Canadian ECA officials reported that they use standards of the World Bank, World Health Organization, Canada, and regional development banks as benchmarks in their reviews. The French ECA established its own set of technical standards for three sectors. These sectors involve the most environmentally sensitive projects and represent a large portion of France’s ECA-financed exports: conventional thermal power plants, large dams, and oil and gas projects. The standards contain a minimum set of criteria, which is largely linked to World Bank standards. In addition, French ECA officials said they encourage but do not require applicants to meet best practice standards, based on the best available technology or practices within the project’s sector.
The German ECA, in contrast, does not rely on a defined set of environmental standards. It requires that all projects meet the environmental standards of the country in which the project is being constructed. German ECA officials stated that if the host country’s standards are not comparable with internationally recognized standards or German national environmental standards, additional information is required before approval. ECAs do not commonly follow the World Bank safeguard policies on social impacts. During the OECD negotiations, there was no consensus on how to account for these impacts, so they are not a part of the Common Approaches. However, some ECAs have taken steps to include considerations for social impacts in their environmental standards. For example, the British ECA requires applicants to answer questions about social and human rights impacts during the screening process. Applicants for projects that will have social impacts must submit a social impact assessment or a mitigation plan to address those potential impacts. The Japanese ECA has also included provisions on social impacts, including impacts on indigenous peoples, in its environmental policy. Ex-Im Bank also considers social impacts in its reviews and has specific guidelines in two of its nine sector tables (forestry operations and hydropower and water resources management). ECA policies regarding public disclosure of project information vary. Some OECD members do not routinely disclose environmental or other project information, some disclose information after project approval, and others disclose before they make approval decisions. Many experts stated that, although ECAs are publicly financed, they are commonly less open about their activities than other government agencies because of their private sector orientation. Some countries, such as Belgium, Germany, Portugal, and Spain, cite national laws and regulations prohibiting disclosure of some information regarding export credit transactions as a barrier to disclosure of project information. Several ECAs provide information to the public, but only after an export credit transaction has been signed. For example, Canada and France are willing to make environmental information about their projects available to the public after the transactions have been approved. Some ECAs are taking steps to provide environmental information on projects to the public before making a decision on whether to approve the project for financing. This is known as “ex ante” disclosure, the policy practiced by the United States, and often involves a public comment period in which outside parties are invited to submit comments on projects; these comments are then incorporated into the ECA’s environmental review. For example, although it has a law restricting disclosure of project-specific information without consent from the financing applicant, the British ECA announced in April 2003 that it would publish information on its Web site about the environmental impacts of its most sensitive projects before making a financing decision. Officials from the British ECA said they made this policy change because they understand that environmental information often becomes public through third parties anyway. In addition, they believe that full disclosure of environmental information is an appropriate policy. They also cited pressure from nongovernmental organizations as a factor in their policy change.
Japan and Australia have made a similar commitment to disclosing environmental information about their most sensitive projects before finalizing an export credit agreement, with consent from their exporters. For example, following the environmental screening process, Japan’s ECA discloses the project’s name, location, and sector, and the reason for its category placement. In addition, for projects that are more sensitive, the ECA publishes on its Web site the status of major environmental and social documents prepared by or on behalf of the exporter, such as environmental impact statements, and makes these documents available to the public. The Japanese ECA also says that it encourages input from concerned organizations or stakeholders regarding the environmental impacts of projects under review. Australia’s ECA has also adopted an ex ante disclosure policy. It provides for a 45-day public consultation period for accepting and reviewing comments from outside parties prior to final project approval. In addition to the differences in their use of technical standards and disclosure policies, ECAs differ in implementing their environmental policies, specifically their criteria for categorizing and defining projects. In instances where several ECAs provide financing for a single project, they might place the project in different categories. A mining project, for example, might be categorized as high risk (category A) in one country and medium risk (category B) in another. The Common Approaches has no prescriptions requiring countries to place specific types of projects in particular categories, thus allowing categorization to be a subjective activity that depends on the opinion of the official reviewing the project. In addition, the very definition of a project can be in dispute. For example, officials from one ECA described a situation where another ECA treated a project with multiple components as a single project for categorization purposes, while they categorized each component separately. OECD members are currently reviewing their efforts to voluntarily abide by the terms of the Common Approaches and may propose an alternative version by the end of 2003. However, a number of factors, including the resistance of some of the participants to certain proposed policies, present challenges to revising the Common Approaches. Nevertheless, several developments outside the formal OECD negotiations, including a series of meetings between ECA environmental experts, may lend some momentum to advancing the Common Approaches. OECD members are in the process of reviewing the Common Approaches. The most recent version of the Common Approaches contains a provision stating that the ECG will review all aspects of the draft to enhance it. Participants stated this is typical of OECD multilateral negotiations, which often begin with general principles and gradually advance to a more detailed, comprehensive agreement. In this regard, officials from most of the countries we visited agreed to voluntarily comply with the terms of the Common Approaches. They stated that the most recent version is a good first step toward achieving a common approach to environmental standards for ECAs. For example, one official stated that, given the inexperience of many ECAs in applying environmental standards, it would take several years for OECD members to accept guidelines similar to the Ex-Im Bank’s.
A key aspect of the review process for the Common Approaches is the annual reporting among members of information about sensitive projects. During the negotiations, most members would not support prior disclosure of projects, which would have allowed the public to evaluate the application of environmental standards before projects were approved. As a compromise, members agreed to report annually on sensitive projects to evaluate how countries are abiding by their voluntary obligations. The Common Approaches states that members shall provide certain details about category A and category B projects exceeding 10 million special drawing rights. The required details include a brief description of the project, its sector, the type of environmental review conducted, and the standards or benchmarks used in the review. Some ECA officials stated that the quality of reporting was not uniform across ECAs. They added that some countries have been very forthcoming with information but others have not. For example, in several instances the project's host country was not identified, making it difficult to assess the technical standards used to review the project.

Several key meetings in 2003 will give ECG members an opportunity to review and potentially revise the initial version of the Common Approaches. In April, ECG members discussed the results of the first annual report and agreed to provide the ECG Chair with recommendations for modifying the Common Approaches by July 2003. The Chair plans to summarize these recommendations, which ECG members will then discuss in September. The final meeting, in November 2003, may then serve as the venue for agreement on a revision of the Common Approaches that can be put to the OECD Council for a formal decision, according to OECD officials.

Any revisions or enhancements to the Common Approaches during 2003 may be limited because of the nature of the OECD negotiating process and the resistance of many members to some of the more controversial aspects of environmental guidelines. The United States is therefore unlikely to fully achieve its original negotiating objectives, although most OECD members would like the United States to accept the Common Approaches as a formal OECD agreement.

The institutional framework of the OECD makes dramatic changes to the Common Approaches unlikely. The OECD commonly uses a combination of dialogue, peer review, and other forms of noncoercive peer pressure to encourage members to coordinate policies. In addition, OECD committees operate by consensus, so any single member can block a controversial topic, as the United States did with the Common Approaches. While such blocking is considered extreme and rare, according to OECD officials, the consensus requirement ensures that OECD policies evolve gradually.

Several specific factors make it difficult to go beyond incremental changes to the Common Approaches, particularly in areas of interest to the United States. First, while the United States has sought to negotiate a firm set of technical standards that all OECD members would have to use in their reviews, most ECA officials we spoke with prefer a flexible approach to technical standards. Another, more difficult, obstacle is the resistance to disclosing project information. While the United States has pushed for ex ante disclosure of project information, other ECG members are either unable or unwilling to do this.
In addition, some ECG members are unwilling to adopt disclosure practices that are significantly more advanced than those of their major ECA competitors. For example, the Canadian ECA withdrew a proposed ex ante disclosure policy once it became clear that the Common Approaches would not require such a policy, out of concern that the competitiveness of Canadian exporters might be compromised.

A final impediment to achieving more than an incremental advance in the Common Approaches is the competing pressure placed on ECAs by public interest and business groups. OECD members' positions on environmental standards reflect an internal balance achieved in response to domestic pressure. While nongovernmental organizations in some OECD countries succeeded in getting their governments to push for the start of negotiations on environmental standards for ECAs, they have been less successful in achieving their objectives in the negotiations. Nongovernmental organizations in all the countries we visited are uniformly displeased with the results of the Common Approaches to date and are pressing for a broader Common Approaches that includes human rights, labor, and other social issues as part of the review process. However, the business groups we met with resist expanding the scope of the Common Approaches. While some ECAs, such as those of the United Kingdom and Japan, may unilaterally take up these social issues, most countries in our sample are not yet ready to consider adopting them. According to several experts, this reluctance stems primarily from the paramount importance that legislators currently attach to business concerns.

It will be difficult for the United States to fully achieve the objectives it sought at the conclusion of the Common Approaches negotiations in 2001. The United States no longer has the level of influence it had at the start of the negotiations, both because its decision not to join the Common Approaches remains a source of resentment and because Ex-Im Bank no longer has the unique environmental expertise it once did. Nevertheless, OECD members see benefits if the United States signs an OECD agreement. All the ECG members we met with stated that they would like to see the United States accept the OECD's Common Approaches. Some of these officials believe that a formal OECD agreement would provide ECAs with a stronger basis for improvements and convergence. For example, some officials noted that a multilateral agreement permits countries to bring ministerial pressure to bear on issues. This is not possible under the current framework, which rests on voluntary adherence.

Several recent events, including an informal effort by ECG environmental experts, may lend momentum to the negotiations. At the negotiations' outset, many ECG members did not have environmental guidelines and were reluctant to negotiate on unfamiliar technical issues. However, as OECD members become more familiar with the application of environmental standards for ECAs, the likelihood of compromise increases, according to a number of the participants. Participants view a recent effort to share information among ECAs as a particularly promising vehicle for increasing their familiarity with the technical aspects of environmental reviews. After the cessation of the negotiations in 2001, some of the members that had environmental experts (practitioners) in-house began to meet informally to discuss technical issues that were not addressed during the negotiations.
These issues included defining a "greenfield site," applying technical standards in specific instances, and, more generally, defining a project for the purpose of environmental review. To date, three such practitioners' meetings have been held, with broad participation by ECG members.

While the practitioners' meetings are an unexpected consequence of the conclusion of the negotiations, ECA officials we spoke with stated that the meetings may help advance the Common Approaches. First, they have been very useful in giving practical information on technical issues to ECG members that have only recently adopted environmental guidelines. In addition, they may provide helpful information to the negotiators on technical issues. For example, the practitioners have recently created four subgroups to focus on issues in specific sectors; these subgroups will report back to the ECG on their findings. Finally, the practitioners may also provide members with some assurance that the terms of the Common Approaches are being met. As one official told us, the practitioners can ask specific questions of one another about how environmental standards were applied to specific projects. This information would not otherwise be available through the formal annual reporting process.

Another development that may lend some momentum to advancing the Common Approaches comes from the commercial banking sector. Recently, 15 of the world's leading project finance institutions agreed to apply a set of principles incorporating environmental reviews of their projects. These principles, called the Equator Principles, call for the application of the World Bank technical standards in the Pollution Prevention and Abatement Handbook and the International Finance Corporation's safeguard policies to projects costing $50 million or more for which project sponsors are seeking direct lending from the banks involved. The banks that follow the Equator Principles have pledged to screen and categorize projects based on environmental risk. They will also require environmental assessments demonstrating compliance with the World Bank guidelines for projects with high or medium environmental or social risk. While adherence to the Equator Principles is voluntary, it indicates a growing understanding in the commercial banking sector of the importance of assessing environmental risk along with credit risk for these types of projects. Some officials believe this is evidence that the business community increasingly accepts the environmental assessment process as the norm for large development projects. This development may exert a positive influence on the ECA negotiations.

There is limited evidence that Ex-Im Bank's environmental guidelines have affected U.S. exports, although the complexity of potential effects and the lack of information make identifying and quantifying impacts difficult. The evidence we reviewed indicates that any impacts are likely to be concentrated in certain areas, especially the energy sector. The majority of projects authorized by Ex-Im Bank do not require significant environmental review, and most projects in the full environmental review category are in the energy sector; almost all are project finance cases. Trends in Ex-Im Bank financing to sectors where environmental reviews have been concentrated do not show clear impacts, and available data on applications and approvals are not adequate to capture decisions made early in the application process or through informal channels.
Finally, we found that the evidence of business impacts is largely anecdotal, and the lack of data makes objective quantitative analysis difficult.

A substantial portion of Ex-Im Bank financing does not require significant environmental review. Ex-Im Bank's environmental reviews are concentrated in the energy sector, largely because of the financing structure of many energy projects. Energy sector projects are expected to be of continuing importance to Ex-Im Bank because of rising energy demand in developing countries.

Only about one-third of long-term Ex-Im Bank financing undergoes an environmental review after initial screening. Of the 522 long-term transactions authorized by Ex-Im Bank from October 1995 to May 2003, 42 were subject to a full environmental review and 181 were subject to a medium review. As figure 3 shows, these transactions represented 14 percent and 22 percent, respectively, of long-term Ex-Im Bank financing in terms of contract value. The remaining long-term transactions were subject only to an initial screening, because Ex-Im Bank's guidelines exempt from further review certain categories of projects considered to have little or no potential environmental effect, such as sales of aircraft, locomotives, and air traffic control systems. The remainder of Ex-Im Bank financing does not undergo environmental review, because medium- and short-term transactions are generally not subject to either screening or review. Our analysis of Ex-Im Bank data showed that about 40 percent of its financing is for short-term transactions.

Environmental reviews of Ex-Im Bank's long-term financing tended to be concentrated in the energy sector. For example, from October 1995 to March 2003, authorized transactions that underwent full environmental reviews were mainly energy-related transactions (that is, thermal power plants, oil and gas development, and hydropower plants). Of the 42 transactions that underwent a full environmental review, 16 were for thermal power plants and 9 were for oil and gas exploration projects. (See fig. 4 for the sector breakdown of full review projects.) Energy-related projects represent a high percentage of projects undergoing a full environmental review largely because many are financed under project finance terms, which entail greater overall financial risk to Ex-Im Bank. For example, the 16 thermal power projects and 9 oil and gas exploration projects were all financed under project finance terms.

Energy-related projects have been an important part of the financing portfolio both for Ex-Im Bank, as noted above, and for other ECAs. For example, financing for energy sector transactions represented about 27 percent of Ex-Im Bank financing during the 1990s and reached 47 percent in 1995. In 2001, of the $12.5 billion in U.S. exports supported by Ex-Im Bank, nearly $2 billion was in energy sectors, including electric power generation and transmission and oil and gas exploration and refineries. In 2001, oil and gas facilities accounted for 38 percent of the Japanese ECA's financing, and power generation and transmission projects accounted for 25 percent of the British ECA's financing. According to the OECD, 36 percent of OECD member projects (18 out of 50) that required full environmental reviews (category A reviews) in 2002 were energy projects, and these projects accounted for 48 percent of ECA financing.
Energy sector financing is expected to remain important for ECAs because of projected increases in energy demand and associated investment needs in developing countries. The International Energy Agency's 2000 World Energy Outlook projects that over the next two decades, nearly $3 trillion worth of investment in worldwide electricity generating capacity will be needed, not counting investment in transmission and distribution networks. The same report projects that world electricity generation will increase at an annual rate of 2.7 percent until 2020 and that nearly 3,000 gigawatts of new generating capacity will be installed around the world, more than half of it in developing countries, especially in Asia. The report also projects that OECD countries' share of world energy demand will continue to decline while developing nations' share will continue to grow.

For the types of ECA projects that are subject to environmental reviews, available data are limited and do not show clear impacts, and assessments are difficult because of the complex interplay of factors affecting financing and export trends. Trends in Ex-Im Bank financing to sectors where environmental reviews have been concentrated do not show clear impacts. In addition, available data on applications and approvals are insufficient for analytical purposes because they do not capture decisions made early in the application process or through informal channels. Further, environmental policies are only one among many factors that may affect the competitiveness of U.S. exports, and impacts vary depending on the nature of the exporter.

At the company level, business views on the impacts of environmental guidelines are mixed. While many business representatives we spoke with have concerns about the environmental review process, including project delays, additional costs, and disclosure, most of the evidence is anecdotal. Several business representatives said they were less concerned about meeting the technical standards of environmental guidelines than about dealing with uncertainties associated with the environmental review process, including reactions to the public disclosure of project information. We generally could not assess the extent to which these concerns reflected actual impacts caused by environmental guidelines or the magnitude of any such impacts. In addition, some business representatives stated that meeting Ex-Im Bank guidelines was consistent with their own requirements to identify issues that could potentially undermine projects.

Trends in Ex-Im Bank financing to certain environmentally sensitive sectors do not show evidence of impacts of environmental guidelines, although a simple trend analysis cannot isolate those impacts from others. We reviewed Ex-Im Bank's financing in four sectors: thermal power, oil and gas development, hydropower, and metal mining. Table 3 shows the share of authorized financing to these sectors for periods before and after the adoption of Ex-Im Bank guidelines. The proportion of financing to oil and gas development projects stayed about the same after the implementation of environmental guidelines; financing of thermal power plants declined, while financing of metal mining increased.

An additional data limitation is that formal decisions on Ex-Im Bank projects provide only partial information regarding the impact of environmental guidelines, for several reasons.
First, several companies said they use informal channels to determine whether environmental issues are likely to be a stumbling block before they submit final applications and that they might not submit an application if they anticipated concerns. Second, projects may be withdrawn or cancelled throughout the application process for any number of reasons that are not publicly reported. Finally, some projects that might have been submitted to Ex-Im Bank in the past may have been withheld because of the belief that Ex-Im Bank may no longer be willing to approve applications for certain types of environmentally sensitive projects, although it is impossible to determine the extent of this phenomenon. The Ex-Im Bank Board of Directors reviews the environmental effects of projects on a case-by-case basis and may approve a project that does not meet all Ex-Im Bank environmental guidelines, considering significant mitigating effects and circumstances. Financing may be conditioned on the implementation of mitigating measures.

Environmental policies are only one among many factors that affect the competitiveness of U.S. exports financed by Ex-Im Bank. Other factors include various Ex-Im Bank policies, such as domestic content requirements, its application process and underwriting requirements, and the terms of coverage its policies provide. Other competitiveness factors, such as foreign exchange rates and the geographic location of projects, are unrelated to Ex-Im Bank policies. In addition, factors such as the technological specifications of U.S. exports can be important to sourcing decisions. For example, one multinational company told us that whether the host country has 50-cycle or 60-cycle electricity technology is the overriding factor in determining where its products will be manufactured.

The potential impacts of ECA environmental guidelines on U.S. exports depend in part on the overall business structure of the firms seeking ECA financing. For businesses that produce or source their products in the United States, the implementation of common environmental guidelines across ECAs should theoretically lower the threat of losing business to other ECAs with lax environmental standards. Because these companies are generally confined to doing business with Ex-Im Bank, they would otherwise lose export business if project sponsors selected another ECA instead of Ex-Im Bank; these companies have therefore been the strongest business advocates for common guidelines. However, multinational companies may not be affected to the same degree. Officials from several of the companies we met with stated that, as multinational companies, they have been able to get financing from ECAs other than the U.S. Export-Import Bank. These companies are large and flexible enough to seek financing from ECAs in other countries where they have a business presence if they believe that Ex-Im Bank's policies, including its environmental review process, would constitute a significant barrier to winning a project.

Business views on ECA environmental guidelines are mixed. Some business representatives we spoke with expressed concerns about specific impacts of the environmental review process, such as delays and costs. Others were concerned about the impacts of more intangible aspects that lend uncertainty to the process, such as public disclosure of project information.
Some business representatives also acknowledged that their businesses are integrating ECA environmental review policies and procedures into their own risk assessment processes.

Representatives of several companies cited delays during the project approval process as the key impact of the environmental assessment. Ex-Im Bank often asks project sponsors and suppliers for additional information to supplement the initial environmental impact assessment submitted. In our review of 24 thermal power plant, oil and gas development, and metal mining projects authorized by Ex-Im Bank between 1995 and 2003, we could not determine whether the environmental assessment caused the project delays that companies claimed. We found delays in some instances related to the gathering and submission of existing documentation to Ex-Im Bank for review and to the discussion of outstanding issues, but the records were insufficient for attributing delays to environmental reasons as opposed to financial or other issues. We did find that Ex-Im Bank, in some cases, took measures to limit delays caused by environmental reviews and requirements, including sending staff to review documentation in country and making project support contingent on certain documents being provided at a later date.

Business representatives also cited additional costs as an area of concern, especially when project costs increased because of modifications necessary to meet environmental requirements. We found that for the 24 projects we reviewed, Ex-Im Bank engineering staff did in some instances require project modifications to meet Ex-Im Bank guidelines. For example, a coal-fired power plant located in China met all of the air quality standards except for particulate emissions: the Chinese-built pollution control device met local standards, but daily emissions would have exceeded the Ex-Im Bank guidelines. The local operator agreed to operate the device at a slightly higher control efficiency, which reduced emissions sufficiently to meet Ex-Im Bank's daily emission limit. Ex-Im Bank officials noted that, as companies have become more familiar with Ex-Im Bank guidelines, new projects are now much less likely to require modifications upon review.

Some businesses are also concerned about other aspects of the environmental review process. Many business representatives we spoke with believe that their products can readily meet the technical standards of environmental guidelines. They are concerned, however, about aspects that may result in lost business. For example, some elements of Ex-Im Bank's environmental guidelines require a more qualitative judgment of project impacts, such as how to mitigate socioeconomic and sociocultural impacts (for example, those associated with the dislocation of people). Some business representatives stated that these more qualitative areas of environmental standards present challenges and risks to businesses because of the importance of other parties, such as host governments, in making and carrying out commitments.

Disclosure of project information during environmental review is another concern for some businesses. Some companies are concerned that disclosure of project information may cause them to lose business to competitors who become aware of a project through the disclosure process. Other companies were also concerned about the potential impacts of public scrutiny.
One company representative said that Ex-Im Bank's disclosure policy was part of the reason the company's sourcing has shifted to Europe, since European ECAs do not disclose information prior to project approval; however, most company representatives did not identify differences in environmental guidelines as the determining factor in sourcing decisions. We did not find specific examples in which the disclosure of project information had negative impacts. Company representatives we spoke with did not provide us with any specific cases in which they lost business because of the publication of an environmental impact assessment; their concerns were primarily hypothetical. The environmental impact assessments we reviewed did not contain any business proprietary information or information on the specific companies involved in the projects. According to Ex-Im Bank officials, any such information would be removed by the applicant or the owner of the environmental assessment prior to the release of the document to interested parties.

Some companies acknowledged that they are integrating ECA environmental review policies and procedures into their own risk assessment processes. For example, several companies said that environmental review is increasingly viewed as a key component of their overall due diligence, which they conduct regardless of ECA requirements. Companies also acknowledge that, because environmentally sensitive projects are coming under increasing NGO scrutiny, their reputations may be at risk if the projects they are involved in are deemed environmentally damaging.

We provided a draft of this report to the Secretaries of State and the Treasury and the Chairman of Ex-Im Bank. The Department of the Treasury and Ex-Im Bank provided written comments on the draft report, which are reprinted in appendixes V and VI, respectively. The Department of the Treasury considered the report well balanced but emphasized its belief that U.S. leadership on this issue has had a significant positive impact among export credit agencies, despite the lack of a formal OECD agreement. Ex-Im Bank stated that the report provides a thorough analysis but emphasized its view that, despite progress, the broad nature of the Common Approaches does not yet level the playing field for U.S. exporters. The Department of State did not provide formal comments.

We are sending copies of this report to interested congressional committees, the Secretaries of State and the Treasury, and the Chairman of Ex-Im Bank. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Other GAO contacts and staff acknowledgments are listed in appendix VII.

The Chairman of the House Committee on International Relations and the Chairman of the Subcommittee on Europe, House Committee on International Relations, asked us to examine the effects of environmental standards for export credit agencies. In response, we assessed (1) the achievements of the Common Approaches and the remaining differences among the members of the Organization for Economic Cooperation and Development (OECD), (2) the prospects for further advancement on common environmental guidelines for export credit agencies, and (3) the impact that environmental guidelines for export credit agencies may have on U.S. exports.
To identify the achievements of the Draft Recommendation on Common Approaches on Environment and Officially Supported Export Credits (Common Approaches) and the remaining differences among the OECD members, we met with and obtained information from officials at the OECD secretariat, from export credit agency (ECA) officials in a number of OECD member countries (Belgium, Canada, France, Germany, Japan, and the United Kingdom), and from officials at several U.S. government agencies. Specifically, we interviewed officials in the OECD Trade Directorate's Export Credit Division and reviewed OECD documents presented in meetings of the Working Party on Export Credits and Credit Guarantees (ECG). We also met with ECA and other government officials in Belgium, Canada, France, Germany, and the United Kingdom. In addition, we met with senior officials from the Japanese ECA in Washington, D.C. We reviewed and compared the environmental policies of these countries' ECAs as well. We also met with officials from the U.S. Departments of the Treasury and State and the Export-Import Bank. We obtained an understanding of the environmental policies of each ECA we visited based on information we received in interviews and the documents we were provided. We reviewed and compared the ECA policies according to key procedural elements we identified, such as the screening and categorization of projects, the technical standards used during the review, and public disclosure policies.

To determine the prospects for further advancement on environmental guidelines for ECAs, we interviewed and obtained information from OECD, ECA, and other officials from Belgium, Canada, France, Germany, Japan, and the United Kingdom. We also interviewed representatives from nongovernmental organizations (NGO) active in ECA issues in Belgium, Canada, France, Germany, the United Kingdom, and the United States. In addition, we interviewed business groups knowledgeable about export credit issues in Canada, France, Germany, the United Kingdom, and the United States to understand their views on the progress that OECD countries have made since the conclusion of the negotiations and on what they believe will and should happen next. We gained these officials' perspectives on their goals for further negotiations on ECA environmental guidelines and on what additional provisions they would like to include in the next revision of the OECD Common Approaches. We also reviewed documents from the OECD detailing members' experiences with implementing the Common Approaches.

To understand what impacts environmental guidelines for export credit agencies may have on U.S. exports, we met with, and obtained and analyzed data from, officials at Ex-Im Bank and representatives of U.S. businesses. We first obtained and analyzed data from Ex-Im Bank on long-term transactions it had authorized to determine the number of transactions and the amount of Ex-Im Bank financing that fall into each of the three environmental risk categories. We determined that the Ex-Im Bank data were sufficiently reliable for our analyses in this engagement, based on our assessment of the completeness and accuracy of the data. We reviewed the data to determine industry sector representation in each of the categories. We then selected 24 of the authorized projects to determine more specifically how they had been affected by Ex-Im Bank environmental guidelines. These 24 projects were selected using several criteria.
First, we focused on the three industry sectors (thermal power, oil and gas development, and metal mining) representing about 70 percent of non-nuclear, long-term, higher-risk projects. We also selected projects from across the entire period during which Ex-Im Bank's guidelines were in effect. Finally, we selected projects that had received full environmental reviews and projects that had received medium reviews, with an equal number in each category for oil and gas and thermal power. We selected all four metal mining projects because there were so few. We analyzed Ex-Im Bank environmental assessments for each of these projects and met with Ex-Im Bank officials in the Engineering and Environment division to discuss the environmental review process and their interaction with applicants for financing. In addition, we interviewed representatives of nine U.S. companies, including one overseas subsidiary of a U.S. company, to obtain their views and concerns about the impact of environmental guidelines on their exports. These businesses were responsible for 82 of the 522 long-term projects authorized between October 1995 and May 2003 and for 19 of the 38 non-nuclear projects that underwent a full environmental review.

The information on foreign laws or regulations in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We conducted our review from November 2002 through August 2003 in accordance with generally accepted government auditing standards.

Table 4 shows the members of the OECD's ECG and their respective positions on Common Approaches issues. There are 29 members of the ECG. With the United States declining to accept the Common Approaches in November 2001, 28 ECG members agreed to voluntarily adhere to it. By March 2003, 24 countries had reported to the OECD on their category A and B projects for 2002 (that number includes the United States, which is not required to report because it is not formally adhering to the Common Approaches). Four adherents to the Common Approaches had not reported anything as of March 2003: the Czech Republic, Mexico, the Slovak Republic, and Turkey. Seventeen countries reported that during 2002 they had reviewed at least one category A or B project.

ECAs provide financial support for a wide array of goods and services. However, projects in certain areas, such as thermal power, hydropower, and oil and gas, have been the most likely to require environmental review under Ex-Im Bank's or other export credit agencies' guidelines. In this appendix, we describe five recent projects that have been subject to environmental review and briefly discuss the environmental concerns associated with the projects and the ECAs' involvement in them.

Batu Hijau is an open pit copper and gold mine located on Indonesia's Sumbawa Island. A consortium composed of U.S.-based Newmont Mining Corporation, Sumitomo (Japan), and PT Pukuafu Indah (Indonesia) operates the mine; Newmont holds majority ownership in the joint venture. Batu Hijau began operation in 2000 and is expected to continue operating for 20 years. When the mine is completely excavated, 3 billion tons of rock will have been mined, creating a mine pit 2,625 meters (8,612 feet) wide and 460 meters (1,509 feet) deep. As of January 2003, Batu Hijau employed approximately 6,700 people, 95 percent of whom were Indonesian. In 2002, the mine as a whole contributed more than $171 million to the Indonesian economy.
The mine produced 657.7 million pounds of copper and 492,000 ounces of gold in 2002.

Batu Hijau is located primarily within a previously undisturbed tropical forest. Environmental concerns associated with Batu Hijau include loss of vegetation, specifically loss of primary tropical forest and of habitat associated with the protected yellow-crested cockatoo; impacts on local water levels and water quality (pH and sedimentation); disposal of large amounts of excavated rock and of tailings, the waste created during the extraction process; impacts of air emissions from mine infrastructure and equipment; and maintenance of mine pit environmental programs following the cessation of mine operation.

In 1997, Ex-Im Bank provided $425 million in project financing to Batu Hijau project sponsors and developers. Japan's export credit agency also provided support for Batu Hijau. Ex-Im Bank required a number of environmental studies and project modifications designed to minimize the project's environmental and social impacts before providing financing. Project developers have attempted to mitigate environmental concerns through, among other efforts, development of a deep-sea tailings disposal system, operation of a revegetation program, and study of water seepage patterns.

The Camisea natural gas project, a more than $2 billion undertaking located near the Lower Urubamba River in the Echarte district of Peru, involves the extraction and processing of natural gas and natural gas liquids and the transportation of these products to markets in Lima and to ports for export. Royal Dutch/Shell first discovered the Camisea gas fields in the mid-1980s, and Shell and Mobil Oil further explored the fields between 1996 and 1998. In July 1998, Shell and Mobil withdrew from the project, leading the government of Peru to pursue alternative developers. In December 2000, the government of Peru signed a series of contracts with PlusPetrol Corporation (Argentina) and with two consortia that include Grana y Montero (Peru), Hidrocarburos Andinos (Argentina), Hunt Oil Company (USA), SK Corporation (Korea), Sonatrach (Algeria), Sucursal del Peru, Sucursal Peruana, and Techint (Argentina). The Camisea project is expected to supply a substantial portion of Peru's energy needs and to allow for natural gas export.

Camisea requires construction of eight wells accessing the San Martin and Cashiriari natural gas fields; a liquid separation plant to separate water and liquid hydrocarbons; two pipelines (one for natural gas and one for natural gas liquids), one estimated to run 540 kilometers (336 miles) and the other 680 kilometers (423 miles); a coastal fractionation plant to separate liquids into commercial-quality products; and an offshore loading facility. Construction of project components under the consortium contracts began in 2001 and was roughly 60 percent complete as of February 2003. The project is scheduled to begin commercial production in August 2004. The San Martin and Cashiriari gas fields together contain proven reserves of 8.7 trillion cubic feet of natural gas and 545 million barrels of natural gas liquids. Project officials estimate that construction will employ an average of 1,700 people.

The Camisea project is located within Peru's Amazon jungle in close proximity to several voluntarily isolated indigenous peoples. Two-thirds of the project area, sometimes referred to as Block 88, lies within the Nahua-Kugapakori Indigenous Reserve and straddles the Camisea River.
The coastal fractionation plant will be located in the buffer zone of the Paracas National Reserve, Peru's only coastal marine reserve. The project pipelines will traverse the rain forests between Camisea and the coast, passing over the Andes Mountains at an altitude of 4,500 meters (14,764 feet). The location of the project has led to major concerns about Camisea's environmental and social impacts. These concerns include increased contact between indigenous peoples and project employees and the associated risks of epidemic disease and cultural damage; improper use of the project right-of-way by Peruvians seeking fertile land, with resulting erosion and loss of biodiversity along the right-of-way; loss of biodiversity in the Camisea River and related effects on indigenous peoples dependent on the river for fish and water; the lack of sufficient information in the environmental assessment on alternative sites needed to justify construction of the project's marine terminal facility adjacent to the environmentally sensitive Paracas Bay Natural Reserve; and loss of biodiversity in the Paracas National Reserve. On August 28, 2003, Ex-Im Bank's Board, in a 2 to 1 vote, declined to support Camisea on environmental grounds.

The Chad-Cameroon Petroleum Pipeline Project accesses the oil fields at Doba in southern Chad and transports the oil 1,070 kilometers (665 miles) to an offshore oil-loading facility on Cameroon's coast. Sponsors of the project, ExxonMobil (USA), Petronas (Malaysia), and ChevronTexaco (USA), estimate construction costs will be $3.5 billion. Estimates indicate that the government of Chad will receive a total of $2 billion in revenues from the project, while the government of Cameroon will receive $500 million, assuming reserves of 917 million barrels of oil. In Chad, revenues from the pipeline could increase total government revenues by 45 to 50 percent. In the fourth quarter of 2002, wage payments of $12 million were made to the project's 9,643 workers from Chad and Cameroon. As construction concludes, project developers are reducing the workforce; nonetheless, wage payments totaling $10.1 million were made to workers from Chad and Cameroon in the first quarter of 2003. Pipeline operation began July 24, 2003, and full operation is expected to commence by the end of 2003.

The World Bank Group has provided $92.9 million in direct loans to the governments of Chad and Cameroon to finance the governments' minority holdings in the project. Additionally, the International Finance Corporation, the World Bank Group institution that facilitates private sector projects, has provided $100 million in loans to the joint venture pipeline companies and has mobilized an additional $100 million from commercial lenders. Project sponsors made some modifications to meet standards established by the World Bank and Ex-Im Bank, including altering the pipeline route and developing a community consultation process. Concerned NGOs, however, claim that project developers insufficiently addressed the concerns of local residents and that few changes resulted from the environmental reviews.
Environmental and social issues raised by both the World Bank and environmental NGOs include possible oil spills occurring along the pipeline or at the offshore oil-loading facility; decreases in biodiversity along the pipeline right-of-way, particularly along the Sanaga River system, within Cameroon's Atlantic littoral rainforest, and in the Kribi coastal region; negative effects on indigenous Bakola pygmies living in the vicinity of the pipeline; and governmental repression of opposition to the pipeline, as seen in the imprisonment of a Member of Parliament for his opposition to the project.

Both Ex-Im Bank and France's ECA, Coface, have provided support to the Chad-Cameroon project. In 2000, Ex-Im Bank approved $200 million in export credit guarantees for a U.S.-based engineering firm contracted to build the pipeline portion of the project.

The Olkaria III geothermal power plant, located in the Olkaria Domes geothermal field near Lake Naivasha, is Kenya's first privately developed and owned geothermal power plant. Olkaria III is the third geothermal development project undertaken in the Olkaria region but the first developed by ORMAT, a U.S.-based geothermal developer. Olkaria I has been operational since 1981 under the governance of Kenya Electricity Generating Company Ltd. (KenGen), a Kenyan state-owned energy enterprise. KenGen is supervising the public sector development of Olkaria II, scheduled to begin operation in September 2003. Olkaria III began operation of an early production facility in August 2000, with a scheduled expansion of production from 12 megawatts to 48 megawatts. ORMAT funded the entire $50 million first phase of the Olkaria III project.

In 1984, 3 years after Olkaria I began operation but before the creation of Olkaria II and III, Kenya created Hell's Gate National Park, including in the park the tract of land upon which the Olkaria geothermal plants are located. Additionally, indigenous Maasai peoples have historically occupied the land surrounding Lake Naivasha. These two complicating factors have led to environmental and social concerns surrounding the Olkaria developments, including possible emissions-related negative health impacts on local Maasai; Maasai loss of historically occupied lands; and possible negative impacts on local flower growers and on wildlife dependent upon Lake Naivasha water. Olkaria III project developers have addressed some environmental concerns through the use of air-cooled geothermal technology and the reinjection of geothermal fluids produced by the plant, technologies not employed in Olkaria I or II. ORMAT's application for Ex-Im Bank support has been pending since 2001, and no decision had been made as of August 28, 2003. Ex-Im Bank officials stated that the project delay was not due to environmental concerns.

The government-owned Three Gorges Dam, located on the Yangtze River in China's Hubei Province, will be the largest hydroelectric plant in the world when it is completed in 2009. Dam construction began in 1994, and the water sluice gates were first closed on June 1, 2003. Companies headquartered throughout the world have received construction contracts. The Chinese government has undertaken construction of the dam primarily to increase China's power generation capacity, control downstream flooding of the Yangtze, and improve river navigation for large vessels. The dam's 26 turbine generators will generate 84.7 billion kilowatt-hours of electricity annually from a renewable energy source without creating pollution.
The dam itself will stand 181 meters (594 feet) high and create a reservoir stretching over 600 kilometers (373 miles). The reservoir is expected to have a floodwater storage capacity of 28.97 billion cubic yards. A multistage ship lock and lift will provide upstream navigation for river vessels.

Throughout the planning and construction of the Three Gorges Dam, the project has raised environmental and social concerns. These include inadequate treatment of water discharged above the dam and associated health risks for communities bordering the reservoir; relocation and provision of housing and employment for the up to 1.3 million people residing in the plain of the reservoir; loss of historical and archeological artifacts located in the plain of the reservoir; the possibility of sedimentation limiting the dam's ability to control flooding and increasing regional seismic activity; and alterations in the Yangtze River's ecosystem and surrounding river basin. The Chinese government has taken steps to address several of these issues, including the relocation, beginning in 1995, of the 1.3 million people affected by the dam's construction; efforts to remove historical and archeological artifacts from the reservoir area; and creation of water treatment plants upstream of the dam. The results of the government's efforts to improve the environmental impact of the dam have been subject to debate.

In May 1996, Ex-Im Bank's Board of Directors declined to issue a letter of interest to exporters seeking a financing commitment for the Three Gorges Dam project. This action was based on a determination that the information made available to date indicated that the project as planned would not meet the Bank's environmental guidelines. Ex-Im Bank sent a letter in July 1996 detailing the type and scope of information it would need to identify and assess proposed mitigation measures that could be incorporated into the project in order to meet its guidelines. That information was never provided, and project developers eventually obtained support from other OECD export credit agencies.

Table 5 details the environmental review procedures and policies of the export credit agencies of six OECD countries that have agreed to voluntarily adhere to the Common Approaches, and of Ex-Im Bank. The screening procedures, impact categories, and environmental review processes are generally similar for all of the ECAs. The main differences are in the ECAs' public disclosure policies and in their use of technical standards for environmental reviews.

In addition to the persons named above, Stephanie Robinson, Ming Chen, Laura Yannayon, Sarah Ellis Peed, Rona Mendelsohn, and Jane-yu Li made key contributions to this report.
Export credit agencies (ECA) provide billions of dollars in support for large-scale industrial projects annually, but until recently most ECAs did not formally review the environmental impacts of these projects. The United States, whose Export-Import Bank began using environmental guidelines in 1995, pushed for negotiations on common ECA environmental guidelines at the Organization for Economic Cooperation and Development (OECD). The OECD negotiations halted in 2001 because the United States believed that the results, called the Common Approaches, were insufficient. The remaining OECD members then pledged to voluntarily implement the Common Approaches. In response to congressional interest in ECA environmental guidelines, GAO assessed (1) the level of convergence among OECD members and the prospects for further advancement and (2) what impacts such guidelines may have on U.S. exports.

Since OECD negotiations began, members have made progress in developing environmental guidelines for their ECAs and are moving toward common environmental review practices. However, important differences remain. Having agreed to voluntarily implement the Common Approaches beginning in 2002, many OECD members adopted similar basic procedures for reviewing sensitive projects. However, OECD members' guidelines and practices differ in areas where the United States believes its policies are among the more advanced, including which technical standards ECAs use to review projects and the extent to which environmental impact information is publicly disclosed. Although OECD members are considering revising the Common Approaches in 2003, the United States is unlikely to achieve all of its original negotiating objectives, because some OECD members want to gain more experience with the guidelines before renegotiating them and others are reluctant to take any steps that might be perceived as harming the competitiveness of their exporters.

There is limited evidence that the Export-Import Bank's environmental guidelines have affected U.S. exports, although the complexity of potential effects and the lack of information make identifying and quantifying impacts difficult. The evidence GAO reviewed indicates that any impacts are likely to be concentrated in the energy sector. Most Export-Import Bank transactions do not require an environmental review because they are short-term transactions, fall in certain excluded sectors, or are not considered environmentally sensitive. Finally, while some businesses are more concerned than others about the impacts of environmental guidelines, their specific concerns are largely anecdotal and difficult to confirm.